
Software Engineering

If a process has a name, it's an entity

28 February 2020 06:41 AEST
Event choreography Process orchestration Event-driven architecture Software engineering Domain modelling AAA Single Responsibility Principle

There are many flavours of event-driven architecture. I tend to speak and write a lot about event choreography, as 1) it’s a pattern that allows for effective separation of concerns, and 2) it’s a more accessible introduction to event-driven architectures than things like event sourcing. (Although I do make noise about event sourcing, too.)

In particular, in-process event choreography is extremely useful as it allows different parts of the same codebase to take action without being directly coupled together, but still allows for all of the state changes to be committed within a single transaction. Similarly, for distributed systems, event choreography is a powerful pattern to deliver simple, scalable workflows across disparate parts of an organisation.

Choreography versus orchestration

It is, however, important to draw a distinction between choreography and orchestration. If the system that will perform the action makes the decision, it’s choreography. If the system that will perform the action gets told to do it by another system, it’s orchestration.

The key is which system decides that an action should be taken.

  • Choreography is when a system takes independent action of its own volition as a result of the action of another system.
  • Orchestration is when some kind of process manager instructs a system to take action, usually as a result of an action of another system.

So which should I use? How do I decide?

This brings us to the question of which we should use, and when. A simple rule of thumb: if a business process has a name and some state, it's an entity. When we speak in terms of rules, e.g. “When a customer signs up, send them a welcome email”, choreography is a good candidate. When we speak in terms of process names, e.g. “Your application process is at the ‘Pending approval’ stage”, that's a good hint that we're in orchestration territory.

How do we model an orchestrated process?

We’ve already established that an orchestrated process has both state (“Where is this process up to?”) and behaviour (i.e. state transitions). This sounds suspiciously like… a domain entity.

If we accept that our process is a domain entity, many things fall into place. For one, we can query its state, and we can instruct it to change state. It can also refuse to undergo an invalid state transition, e.g. a call to a LoanApplicationProcess.Approve(...) should probably fail if LoanApplicationProcess.CurrentState == LoanApplicationState.NotYetSubmitted.
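
As a rough sketch (the states, events and exception type here are illustrative assumptions rather than anything prescribed by this post), a process-as-entity might look something like this:

public class LoanApplicationProcess
{
    public Guid Id { get; private set; }
    public LoanApplicationState CurrentState { get; private set; }

    public void Approve(Guid approverId)
    {
        // Assert: only an application that is pending approval can be approved.
        if (CurrentState != LoanApplicationState.PendingApproval) throw new DomainException("This application is not awaiting approval.");

        // Act: record the state transition.
        CurrentState = LoanApplicationState.Approved;

        // Announce: shout that the thing was done.
        DomainEvents.Raise(new LoanApplicationApprovedEvent(this, approverId));
    }
}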

Doesn’t process orchestration mean I need an enterprise service bus? Should I get one?

No matter what problem you have, an enterprise service bus can always find a way to make it worse.

Thinking purely in terms of domain-driven design, we already have all of the puzzle pieces we need to solve this problem: domain logic and state persistence. This sounds suspiciously like it would fit into the normal structure of our domain model. We presumably have a repository implementation of some sort, and a persistence layer.

When our process needs to respond to an external event, we can use a vanilla event handler to hydrate our process instance and dispatch the event to the process.
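
A sketch of such a handler, in the style of the handlers and repositories used elsewhere on this blog (the event type, its properties and the process method are my assumptions for illustration):

public class DispatchItToTheProcess : IHandleEvent<CreditCheckCompletedEvent>
{
    IRepository<LoanApplicationProcess> _processRepository;

    // ...

    public void Handle(CreditCheckCompletedEvent e)
    {
        // Hydrate the process instance to which this event relates...
        var process = _processRepository.Get(e.LoanApplicationId);

        // ... and let the process entity decide what, if anything, the event means for its state.
        process.RecordCreditCheckResult(e.Outcome);
    }
}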

The only missing piece is when we have a process that is dependent on time, which can be solved quite easily with a metronome which hydrates every non-finished process entity and calls a .TimePasses(DateTimeOffset now) method on it.
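
A minimal sketch of such a metronome, assuming a repository that can hand us the unfinished process instances and a clock abstraction (both assumptions on my part):

public class ProcessMetronome
{
    IRepository<LoanApplicationProcess> _processRepository;
    IClock _clock;

    // ...

    // Invoked on a schedule (say, once a minute) by whatever timer infrastructure we already have.
    public void Tick()
    {
        var now = _clock.Now;
        foreach (var process in _processRepository.GetAllUnfinished())
        {
            // Each process entity decides for itself whether the passage of time means anything.
            process.TimePasses(now);
        }
    }
}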


The cat sat on the mat.

26 November 2019 07:39 AEST
Software engineering Domain modelling Event-driven architecture Event choreography AAA Single Responsibility Principle

… a facetious look at the nature of events.

Anything that happens, happens.
Anything that, in happening, causes something else to happen, causes something else to happen.
Anything that, in happening, causes itself to happen again, happens again.
It doesn’t necessarily do it in chronological order, though.

– Douglas Adams, Mostly Harmless

Principles of expressing events

  • Events are immutable facts about the past.
  • The system that undergoes the state change is responsible for publishing the event.
  • The system publishing the event is responsible for defining the structure of the event.

I’ll deal with each of these principles individually - and slightly out of order. Please forgive the poetic licence.

Principle: The system that undergoes the state change is responsible for publishing the event.

Whilst modesty is admirable in a human, in software it is confusing. The component that has undertaken the action or undergone the state change should be the one to publish an event saying so. Our software should shout about its own doings, and not expect the town crier to follow it around.

Our code should follow the Assert. Act. Announce. pattern. Specifically, it should assert its preconditions, perform precisely its appointed task (and nothing more) and then announce that that task has been performed.

If we have some code that looks like this:

var cat = CatFactory.SpawnOne();
var mat = MatFactory.WeaveOne();
cat.SitOn(mat);

then our Cat implementation could look something like this:

public class Cat
{
    // ...

    public void SitOn(Mat mat)
    {
        // Assert
        if (_remainingLives <= 0) throw new DeadCatException("Dead cats can't sit on things.");

        // Act
        _currentlySittingOn = mat;
        mat.BeSatUponBy(this);

        // Announce
        DomainEvents.Raise(new CatSatOnMatEvent(this, mat));
    }
}

Crucially, the Cat is the one that has undergone the state change so it is the one that emits the event. (Side note: the Mat might also change state by virtue of being sat upon, in which case it should emit its own event.)

Principle: The system publishing the event is responsible for defining the structure of the event.

… and for honouring that contract.

The system that underwent the state change is the one that knows what happened. It is therefore the best placed to define the structure of the event representing that state change. Crucially, the publishing system should not attempt to cater to downstream subscribers who might desire additional information.

Consider the following event:

public class CatSatOnMatEvent
{
    public Instant Timestamp { get; set; }
    public Guid CatId { get; set; }
    public Guid MatId { get; set; }
}

This structure is quite terse but describes exactly what happened and only what happened. This is a Good Thing.

What happens, though, when a downstream subscriber wants to know the cat’s name? We end up with something like this:

public class CatSatOnMatEvent
{
    public Instant Timestamp { get; set; }
    public Guid CatId { get; set; }
    public string CatName { get; set; } // not a good idea
    public Guid MatId { get; set; }
}

What could be so harmful about that? Let’s go one further and suggest that another downstream subscriber wants to know the colour of the mat upon which the cat sat:

public class CatSatOnMatEvent
{
    public Instant Timestamp { get; set; }
    public Guid CatId { get; set; }
    public string CatName { get; set; } // not a good idea
    public Guid MatId { get; set; }
    public Color MatColor { get; set; } // also not a good idea
}

All of a sudden, we discover that our publisher will be responsible for marshalling all sorts of additional information just in case a subscriber wants it. The first subscriber doesn’t care about the colour of the mat and the second doesn’t care about the name of the cat, yet the publisher ends up being obliged to collate both every time it publishes an event just in case a subscriber wants it. This gives the publisher potentially as many reasons to change as there are subscribers - exactly the opposite of what the Single Responsibility Principle advocates.

What the publishing system should be doing is including only the information 1) about which it is authoritative; and 2) necessary to convey the essence of what happened. The system that cares about the name of the cat can then call a service to retrieve the name of the cat; likewise for the system that cares about the colour of the mat.
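
For instance, the subscriber that cares about the cat's name might look something like this sketch (an event handler in the style used elsewhere on this blog; the lookup service is an assumption):

public class SendACongratulatoryLetterToTheCat : IHandleEvent<CatSatOnMatEvent>
{
    ICatLookupService _catLookupService;

    // ...

    public void Handle(CatSatOnMatEvent e)
    {
        // The event carries only the cat's identifier. This subscriber wants the name,
        // so it asks the system that is authoritative for cats rather than expecting
        // the publisher to have included it.
        var catName = _catLookupService.GetNameOf(e.CatId);

        // ... address the letter to catName ...
    }
}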

The publishing system is responsible for honouring that event contract for as long as agreed with its downstream subscribers. For an in-process event, a refactoring could be done in a single commit, but for a business event published to a Kafka stream the publisher might need to honour the contract for weeks, months or even years - for as long as its SLA specifies. (Side note: contract tests and/or schema registries will come in useful here.)

Principle: Events are immutable facts about the past.

An event should describe a thing that happened. It must have actually happened at a specific time. Events are expressed in the simple past tense.

The cat sat on the mat.

This is a good event. It’s expressed in the simple past tense. It unequivocally happened. Anything that may have happened after this event does not change the correctness of this event.

The cat is on the mat.

This sentence is in the present tense.

We could validly fire this event an infinite number of times per second, provided that there were, indeed, an appropriate cat sitting upon an appropriate mat - and it would not be helpful. It would be much more helpful to announce, once, at the moment the cat first sat upon the mat, that it had done so.

The cat was sitting on the mat.

This is in the past progressive tense. It’s in the past, true, but it describes something that was happening and continuing to happen. Provided that the statement was once true (i.e. that the cat, once upon a time, sat upon the mat), the statement will always continue to be true. While a true statement, it is not helpful.

The cat sits on the mat.

Does it? When? Is it there now? Does it just generally sit on the mat? Do all cats sit on all mats?

The cat will sit on the mat.

Are we sure? How sure? Are we sure that it’s going to sit on the mat at precisely the timestamp on the event? It is a cat, after all. What if it gets hungry? Or bored?

This event is neither immutable (it might or might not happen) nor about the past.

The cat would have sat on the mat.

… but it didn’t because cats are fickle. This is not a thing that happened, therefore it’s not a good candidate for an event.

The cat should sit on the mat in ten minutes.

This is a command, not an event. It’s a valid command but it’s not a fact about the past.

Some other considerations

Commands should not be dressed up as passive-aggressive events.

We want our event publishers to observe the Assert. Act. Announce. pattern. Specifically, the publisher of an event should actively not care about whether any downstream consumers take action as a result of the event.

When an upstream system requires a downstream system to take a specific action as a result of something that happened, it should send an explicit command (either synchronous or asynchronous) to that system rather than announcing an event and then waiting expectantly.

Dinner is served, cat!

This is a passive-aggressive event. The publisher of the event is waiting expectantly for their cat to come and eat its dinner, without ever having expressed that expectation.

Cat was requested to eat dinner.

This is also a passive-aggressive event. It expects a specific action to be taken and it is likely that the publisher of the event is waiting for that action to be completed.

Cat! Come and eat your dinner!

This is a perfectly acceptable command; it’s just not an event. It is an imperative instruction for an entity (cat) to undertake an action (come and eat dinner). It is entirely reasonable to send such a command; just not to present it as an event.
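
In code, the difference between the two shapes might look something like this (type names purely illustrative):

// A command: an imperative instruction addressed to a specific recipient,
// sent by a system that cares whether the action gets taken.
public class EatYourDinnerCommand
{
    public Guid CatId { get; set; }
}

// An event: an immutable fact about something that has already happened,
// published with no expectation of any particular downstream action.
public class CatAteDinnerEvent
{
    public Instant Timestamp { get; set; }
    public Guid CatId { get; set; }
}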

Think about your events’ granularity.

It is useful to structure events according to business events that actually happened and that people would talk about in real terms. It is likely that downstream systems will take different actions based on different events, and they won't want to process “catch-all” events that they then have to filter further.

For a more serious example, consider entity-named events (e.g. CustomerEvent) that describe something vaguely related to a customer and, worse, generic CustomerUpdated or even just CustomerUpdate “events” that neither describe what happened nor even assert that something did, in fact, happen.

A downstream system will most likely take very different actions as a result of a CustomerCancelledAccountEvent (perhaps sending an apology letter) than a CustomerChangedEmailAddressEvent (sending an alert email to the old address), and a different action again as a result of a CustomerConvictedOfFraudEvent (perhaps suspending shipments to that customer). If all of those happenings are wrapped up in a single CustomerUpdated or CustomerEvent then 1) too much responsibility is pushed onto each subscriber; and 2) the event payload will bloat in order to accommodate all of the different information required for all the downstream subscribers.
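
As a sketch (shapes illustrative only), the fine-grained alternatives each describe exactly one happening and carry only the data relevant to it:

public class CustomerCancelledAccountEvent
{
    public Instant Timestamp { get; set; }
    public Guid CustomerId { get; set; }
}

public class CustomerChangedEmailAddressEvent
{
    public Instant Timestamp { get; set; }
    public Guid CustomerId { get; set; }
    public string OldEmailAddress { get; set; }
    public string NewEmailAddress { get; set; }
}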

Statements about the future are not events.

We see many examples of these kinds of anti-patterns in transaction-processing systems.

For example, future-dated transactions are inserted into a table in the same way as past-dated transactions (strike one: not about the past). Almost invariably they are then updated with an IsProcessed flag or similar when they are actually processed (strike two: not immutable).


Assert. Act. Announce.

14 November 2019 06:02 AEST
Software engineering Domain modelling Event-driven architecture Event choreography AAA Single Responsibility Principle

A friend and colleague, Steve Morris, and I have been attempting to propagate the naming of this pattern for quite some time, and it's about time I wrote about it. In the vein of the AAA pattern of unit testing, we've been promulgating the AAA of code responsibility: Assert. Act. Announce.

  1. Assert. Check your preconditions.
  2. Act. Do the thing. And just the thing. No more.
  3. Announce. Shout that the thing was done.

Consider the common case of a customer’s changing their subscription level to an online service. Let’s say that the simplified sequence looks something like:

  1. The customer changes their subscription level.
  2. The customer is sent a notification email.
  3. The customer’s billing schedule is adjusted.

We could model it something like this:

public class CustomerService : IAntiPattern
{
    IRepository<Customer> _customerRepository;
    IEmailSender _emailSender;
    IBillingService _billing;

    // ...

    public void ChangeSubscriptionLevel(Guid customerId, SubscriptionLevel subscriptionLevel)
    {
        var customer = _customerRepository.Get(customerId);
        customer.SubscriptionLevel = subscriptionLevel;
        _emailSender.SendSubscriptionChangedEmailTo(customer);
        _billing.ReschedulePaymentsFor(customer);
    }
}

There are several problems with this approach:

  • It’s slow. The customer has to wait for an email to be generated and sent before the operation completes.
  • It’s fragile. If the email server is down, the operation will fail.
  • To add a step to the process, we need to change existing code.
  • The code is resistant to change.
  • The CustomerService becomes a dumping ground for all things customer-related.
  • When we add behaviours to the CustomerService, our existing tests break.
  • Anything can put our Customer instance into an invalid state just by modifying its properties.

If we were to add to the sequence so that it now looks like this:

  1. The customer changes their subscription level.
  2. The customer is sent a notification email.
  3. The customer is sent a notification text message. (New!)
  4. The customer’s billing schedule is adjusted.

then a couple of things happen. The first is that we need to change our existing CustomerService to add behaviour. We’re going to need to change the ChangeSubscriptionLevel method, and we’re also going to need to change the constructor to include some kind of ITextMessageSender. That means that all of our existing unit tests will immediately fail to compile.

Think about what we’ve just done. We’ve not changed the correctness of our existing code, yet our unit tests are failing. In other words, we’re teaching ourselves the lesson that whenever we add functionality, existing tests break. This is exactly the opposite of what we want, which is that tests only break when we’ve broken the functionality that they specify.
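
To make that concrete, here's a sketch of the kind of test that breaks. (NUnit, NSubstitute, constructor injection on CustomerService and the Premium subscription level are all my assumptions; the post doesn't prescribe them.)

[Test]
public void ChangingSubscriptionLevelSendsAConfirmationEmail()
{
    var customerId = Guid.NewGuid();
    var customer = new Customer();

    var customerRepository = Substitute.For<IRepository<Customer>>();
    customerRepository.Get(customerId).Returns(customer);
    var emailSender = Substitute.For<IEmailSender>();
    var billing = Substitute.For<IBillingService>();

    // This constructor call is the problem: the moment we add an ITextMessageSender
    // parameter, this test no longer compiles, even though the behaviour it
    // specifies (send an email) hasn't changed at all.
    var service = new CustomerService(customerRepository, emailSender, billing);

    service.ChangeSubscriptionLevel(customerId, SubscriptionLevel.Premium);

    emailSender.Received().SendSubscriptionChangedEmailTo(customer);
}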

Enter the AAA pattern: Assert. Act. Announce.

If, instead of our CustomerService, we just modelled a customer with their behaviours, what might be different?

public class Customer
{

    // ...

    public void ChangeSubscriptionLevel(SubscriptionLevel subscriptionLevel)
    {
        // Assert. Check preconditions.
        if (subscriptionLevel == SubscriptionLevel) throw new DomainException("The new subscription level must be different to the old one");

        // Act. Do the thing.
        SubscriptionLevel = subscriptionLevel;

        // Announce. Shout that the thing was done.
        DomainEvents.Raise(new CustomerChangedSubscriptionLevelEvent(this));
    }
}

By publishing the CustomerChangedSubscriptionLevelEvent we allow downstream behaviours to be decoupled from the original action. We get several benefits from this:

  • We can make the assumption that any time we are passed a domain entity, it is in a valid state. In other words, the customer entity can say no, and refuse a change that would put it into an invalid state.
  • The Customer class doesn’t have to change at all when we add a downstream behaviour.
  • The existing tests continue to compile without change.
  • The existing tests also continue to pass without change.

It’s worth noting that the Customer class in this case is not responsible for sending an email, sending a text or informing the billing department - nor should it be. The Customer entity’s job is to represent a customer, not to orchestrate a business process.

To bolt additional downstream behaviours onto the customer’s action, we use event handlers, e.g.

namespace WhenACustomerChangesTheirSubscriptionLevel
{
    public class SendThemAConfirmationEmail : IHandleEvent<CustomerChangedSubscriptionLevelEvent>
    {
        IEmailSender _emailSender;

        // ...

        public void Handle(CustomerChangedSubscriptionLevelEvent e)
        {
            _emailSender.SendSubscriptionChangedEmailTo(e.Customer);
        }
    }
}

This isn’t a new pattern. Udi Dahan wrote about it in 2009 in his Domain Events – Salvation post, we’ve been using it for over a decade, and the Observer pattern is an original GoF pattern. At the ThoughtWorks Event-Driven Architecture Summit in 2017, a number of us started to thrash out a bit more of an opinion about what “event-driven” actually means, and the various flavours of it. Martin Fowler’s article on What do you mean by “Event-Driven”? is one of the outputs of that summit. We’re generally calling the pattern we’re using in this case event notification or event choreography.

If we want multiple behaviours, we just add multiple handler classes; the principle being that in order to add behaviours, we just add code. We don’t want to change existing code unless the existing behaviour of the system needs to change. We do need to understand which events should be handled synchronously versus which can be handled asynchronously, but there are straightforward solutions to that problem.

The pattern to remember, though, is AAA: Assert. Act. Announce.


The essence of agility

13 February 2019 06:49 AEST
Software engineering Agile

The essence of agility…

… is mastery of your craft.

I was recently reading As You Wish, Cary Elwes’ account of the making of The Princess Bride. If you haven’t read the book: it’s a good read, and well worth your time. If you haven’t seen the movie… inconceivable.

The duel between Westley (Cary Elwes) and Inigo (Mandy Patinkin) is regarded as one of the greatest movie duels of all time. It was choreographed by the legendary Bob Anderson (British 1952 Olympian and coach of the British national foil team) and Peter Diamond (Raiders of the Lost Ark, Highlander, Star Wars), and was one of the last scenes to be shot because the director was adamant that there were to be no body doubles or stunt people in the duel. This meant that Elwes and Patinkin spent many months learning to fence, and fence well.

During the actual filming of the scene, on one of the final days of shooting, the choreography wasn’t working for the camera and Anderson, encouraged by Patinkin, approached the director and suggested that they re-choreograph the sequence on the fly.

In Patinkin’s words:

We only had about twenty minutes, and we rechoreographed that whole sequence, which we had spent weeks choreographing within an inch of its life. We had learned the skill, the basics of fencing, so clearly that Cary and I, with Bob and Peter’s expert guidance, were able to redo the whole sequence up the steps in less than half an hour. That was the highlight of the whole film for me, because we had really learned a skill and we were able to implement it instantly. That was quite thrilling.

My greatest memory and pleasure, in terms of fencing, was the fact that we became proficient enough to improvise on a dime.


On the granularity of events

11 January 2019 06:58 AEST
Software engineering Domain modelling Event-driven architecture Event choreography Event streaming AAA Single Responsibility Principle

In a previous article, I wrote a little about events’ granularity but I wanted to expand on it here a little.

It’s important to understand and accept that the granularity of events will be different across each system - and that there is no One True Way™ to represent a thing that happened. An event’s job is to represent the thing that happened from the perspective of the system that performed the action. In other words, we just lay it out there and let others make of it what they will.

Our system shouldn’t speculate and shouldn’t guess who our subscribers are or what else they might want to know. That’s their job. If we adapt to our n subscribers then we have n reasons to change. If we just announce what happened from our own perspective then we only have one reason to change, i.e. our own system.

In-process events versus long-lived message contracts

NOTE: For the purposes of this article, I’m not talking about event sourcing. We’ll cover that in another article, and we structure, broker and apply our events slightly differently when we’re doing that.

Within a single codebase and a single process, lightweight events (domain events or otherwise) are an excellent way to decouple concerns whilst still ensuring transactional consistency. Events can be simple POCOs, are refactor-friendly, and are able to be changed within the space of a single commit without affecting any downstream applications.

By contrast, a long-lived event is generally written to a durable, persisted store of some description (and for some value of “persisted”) such as Kafka, Rabbit, ASB, Kinesis etc. These event types are generally intended to be consumed by other applications (unless we’re also using event sourcing and persisted events are our primary source of truth), and thus form part of our application’s API contract - and should be treated in the same way as any other part of that contract, namely being versioned and supported for agreed periods of time. We generally publish these event schemas, either via a schema definition like Apache Avro or in a pre-packaged “message DTOs” package or similar.

Reprojecting fine-grained events into coarse-grained events

In this article I’m going to use the term “domain event” to mean an event that occurred within a single domain and which is not visible in that form outside of that domain, and the term “business event” to mean an event that occurred within one domain but that is visible outside of that domain. Those definitions might blur slightly (or a lot) depending on context and perspective but they’ll serve well enough to illustrate the points I’d like to make here.

It’s entirely possible (and reasonable) to map domain events to business events and vice versa. For a concrete example, let’s say that we’re using some simple, in-process domain event brokerage. We’ll use a sample Barista class that looks something like this:

public class Barista
{
    // ...

    public void MakeCoffeeFor(Customer customer, Order order, IEspressoMachine espressoMachine)
    {
        if (!order.IsPaid) throw new OrderNotPaidException("No coffee for you!");
        if (customer.IsShouty) throw new UnrulyCustomerException("Manners maketh [wo]man.");

        var coffee = espressoMachine.PullShots(order.NumberOfShots);
        customer.Accept(coffee);

        DomainEvents.Raise(new BaristaMadeCoffeeForCustomerEvent(this, customer, coffee));
    }
}

Contrived examples notwithstanding, we model our real-world domain by following the AAA pattern and first giving our Barista the autonomy to refuse to make coffee if it hasn’t been paid for, and to refuse service to unruly customers1. Once our preconditions are satisfied, the coffee is (presumably) made, and a BaristaMadeCoffeeForCustomerEvent domain event is raised. This event is brokered in-process during the same unit of work in which the Barista makes coffee.

Once our unit of work completes, we can map our in-process domain events to business events, which can then be published externally to a Kafka topic or similar. A useful pattern is something along these lines:

public interface IMapToBusinessEvent<TDomainEvent> where TDomainEvent : IDomainEvent
{
    IEnumerable<IBusinessEvent> Map(TDomainEvent domainEvent);
}

with a sample implementation looking something like this:

public class BaristaMadeCoffeeForCustomerEventMapper : IMapToBusinessEvent<BaristaMadeCoffeeForCustomerEvent>
{
    // ...

    public IEnumerable<IBusinessEvent> Map(BaristaMadeCoffeeForCustomerEvent domainEvent)
    {
        yield return new BaristaMadeCoffeeForCustomerBusinessEvent
        {
            Timestamp = _clock.Now,
            BaristaId = domainEvent.Barista.Id,
            CustomerId = domainEvent.Customer.Id,
            Description = domainEvent.Coffee.ToString()
        };
    }
}

This pattern gives us a few options:

  1. We can use rich object references in our in-process domain events. This means that we don’t have to resolve the same entity multiple times from a repository just to deal with it in the same unit of work.
  2. We can downcast our rich object references (which won’t serialise very well) to just identifier references when we map to the business event. This also allows us to include only the information that should be externally-visible.
  3. We can choose to publish multiple business events based on a single domain event.
  4. We can choose to not publish any business event at all from a domain event. It’s entirely legitimate that there will be domain events that are purely internal to our application and which we do not want to publish to the broader app ecosystem.
  5. We can refactor our internal, in-process domain event whenever we like, as long as we project it into the same shape of externally-visible business event as before.
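
To round the example out, the glue that runs these mappers once the unit of work has completed might look something like the sketch below. The publisher abstraction is an assumption on my part; the post doesn't prescribe how the mapped events reach Kafka.

public class BusinessEventPublicationHandler
{
    IEnumerable<IMapToBusinessEvent<BaristaMadeCoffeeForCustomerEvent>> _mappers;
    IBusinessEventPublisher _bus;   // e.g. a thin wrapper around a Kafka producer

    // ...

    // Called after the unit of work has committed, with the domain event it collected.
    public void Publish(BaristaMadeCoffeeForCustomerEvent domainEvent)
    {
        foreach (var mapper in _mappers)
        {
            // A mapper may yield zero, one or many business events for a single domain event.
            foreach (var businessEvent in mapper.Map(domainEvent))
            {
                _bus.Publish(businessEvent);
            }
        }
    }
}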

Reprojecting coarse-grained events into fine-grained events

It’s entirely possible (and very likely) that one system will care about events at a finer granularity than another.

Our contrived barista example

To continue our barista example, let’s look at two domains: ordering and fulfilment. Our ordering system is responsible for recording a customer’s order, taking payment and then announcing, “Order up!” via an OrderPlaced event. According to our ordering system, an order is an order is an order. Our stream might look something like this:

OrderPlaced { customerId: 1, sku: 'FLATWHITE', size: 'M', sugars: 1, milk: 'FULLFAT' }
OrderPlaced { customerId: 2, sku: 'CAPUCCINO', size: 'M', sugars: 0, milk: 'SKIM' }
OrderPlaced { customerId: 3, sku: 'MOCHA', size: 'L', sugars: 2, milk: 'FULLFAT' }
OrderPlaced { customerId: 4, sku: 'TOASTED SANDWICH', fillings: 'SURPRISE ME' }

In the physical world, a barista is likely to project an OrderPlaced event into a rather more granular mental stream. Specifically, they’re likely to steam more than one coffee’s worth of milk, and perhaps attempt to aggregate events into a few streams that might look more like this:

OrderRequiringFullFatMilkPlaced { size: 'M' }
OrderRequiringSkimMilkPlaced { size: 'M' }
OrderRequiringFullFatMilkPlaced { size: 'L' }

But why would they do this?

The reprojection of events in this case is because the barista will take a different action based on the nature of the event. If more full-fat milk is required then they’ll just add more milk to the steaming jug before starting, whereas if skim milk is required then they need to steam that separately. If the order is for a toasted sandwich then the Barista doesn’t care at all so it doesn’t even bother to reproject that OrderPlaced event. They make coffee. Sandwiches are someone else’s problem.

The key point here is that while one system (the ordering system) doesn’t draw a distinction between orders containing different kinds of milk, or even drinks versus food at all, the fulfilment system is likely to take very different actions based on some of the events’ payloads.

A real-world example

Contrived examples aside, let’s consider something more serious: a court order prohibiting a convicted criminal from changing address without first applying to a court to vary its orders.

We have a government agency responsible for knowing where a citizen lives and publishing a CitizenChangedAddressEvent to a Kafka topic whenever such an event occurs. As far as that agency is concerned, this is a legitimate action for a person to take. They’ve moved house; they tell their government2.

CitizenChangedAddressEvent: { id: '<some citizen guid>', address: '123 Imaginary Street, Nowheresville' }

The court systems subscribe to this event stream, and care about the CitizenChangedAddressEvent. When a citizen changes address, the system processes that event and performs a series of checks, then potentially projects that event into a more granular event relevant to its own domain:

namespace WhenACitizenChangesAddress
{
    public class ReprojectTheEventStream : IHandleBusEvent<CitizenChangedAddressEvent>
    {
        // ...

        public async Task Handle(CitizenChangedAddressEvent e)
        {
            if ( /* citizen is convicted criminal and has moved across state boundaries */ )
            {
                // ...
            }

            if ( /* citizen is a child subject to a court order */ )
            {
                var child = _childRepository.Get(e.Id);
                DomainEvents.Raise(new ChildSubjectToCourtOrderMovedWithoutConsentEvent(child));
            }
        }
    }
}

resulting in an event stream that could look something like this:

ConvictedCriminalMovedInterstateEvent: { id: '<some citizen id>' }
PersonSubjectToDomesticViolenceOrderMovedTooCloseToVictimEvent:  { id: '<some citizen id>' }
ChildSubjectToCourtOrderMovedWithoutConsentEvent:  { id: '<some citizen id>' }

Within the court system’s domain, these events are likely to be dealt with completely differently. A convicted criminal crossing state boundaries is likely to have a warrant issued for their arrest; a person subject to a domestic violence order who has moved too close to their victim(s) may or may not have a warrant issued; a child who is named in a court order might have a travel ban imposed so that they cannot leave the country.

The court system can then implement specific handlers for these more-specific event types:

namespace WhenAChildSubjectToACourtOrderMovesWithoutConsent
{
    public class PutThemOntoAnAirportWatchList: IHandleEvent<ChildSubjectToCourtOrderMovedWithoutConsentEvent>
    {
        // ...

        public async Task Handle(ChildSubjectToCourtOrderMovedWithoutConsentEvent e)
        {
            var child = _childRepository.Get(e.Id);
            _airportWatchList.Add(child);
        }
    }
}

This seems like a lot of indirection for not much value. Why wouldn’t we just check the original event type in our handler for the CitizenChangedAddressEvent? Well, commonly we want to do more than one thing as a result of an event like this. Let’s add another behaviour:

namespace WhenAChildSubjectToACourtOrderMovesWithoutConsent
{
    public class PutThemOntoAnAirportWatchList: IHandleEvent<ChildSubjectToCourtOrderMovedWithoutConsentEvent>
    {
        // ...

        public async Task Handle(ChildSubjectToCourtOrderMovedWithoutConsentEvent e)
        {
            var child = _childRepository.Get(e.Id);
            _airportWatchList.Add(child);
        }
    }

    public class NotifyAllGuardiansOfTheChild: IHandleEvent<ChildSubjectToCourtOrderMovedWithoutConsentEvent>
    {
        // ...

        public async Task Handle(ChildSubjectToCourtOrderMovedWithoutConsentEvent e)
        {
            var child = _childRepository.Get(e.Id);
            var guardians = _guardianLookupService.FindGuardiansOf(child);
            foreach (var guardian in guardians)
            {
                _alertService.Notify(guardian, $"Child {child}'s address has changed without permission from the court.");
            }
        }
    }
}

Now it becomes obvious that if we were to include the same check for “Does this CitizenChangedAddressEvent mean that a child named in a court order has changed address without consent?” then we would start to see copious copied/pasted code to determine if this was the kind of event to which we should respond. Our code becomes much less testable, its number of dependencies explodes and we’re left with a single, gigantic orchestration method.

By contrast, if we adopt a “When X, do Y” pattern, we get a much more expressive, rule-based system that is much easier to discover. It also makes it much easier to additionally define “Also, when X happens, do Z.”

The point of this reprojection is that the publishing system neither knows nor cares how downstream systems will consume its events. It’s much tidier to reproject the coarse-grained event into finer-grained ones at the point in the ecosystem where differentiation makes a difference to the action that will be taken.

  1. “Manners maketh [wo]man.” ↩

  2. Side note: any government which has achieved a “tell us once” principle for common actions performed by citizens deserves some praise. It’s hard. ↩


Introducing Stack Mechanics

5 October 2017 00:00 AEST
Stack Mechanics Distributed systems Microservices Software engineering Continuous delivery

After living under a rock for far too long, I’m excited to introduce Stack Mechanics.

I’ve teamed up with two great friends of mine, Damian Maclennan and Nick Blumhardt to launch a series of deep-dive courses in distributed systems, microservices architectures and continuous delivery.

Don’t panic - I’m not leaving ThoughtWorks :)

The longer version: Damian, Nick and I, alongside numerous other great engineers, have spent a lot of time solving the hard problems that organisations encounter when trying to move away from monolithic, legacy systems. We’ve seen lots of places that like the potential microservices offer with respect to organisational agility and independence, but that have generally been completely unprepared for the inherent complexities in such systems and how to manage the trade-offs. Usually we weren’t lucky enough for it to be a green-fields project, and as a result we’ve all inherited our share of legacy, tightly-coupled systems with ugly, scary integrations, unknown black boxes, business logic hidden in integration layers (or ESBs, or stored procedures, or BPM layers), and with the many and various failure modes inherent in such ecosystems.

We arrived at a set of practices, patterns, techniques and tools that helped us solve these kinds of problems and we’ve had a lot of success at it. Unfortunately, we still see so many organisations making the same kinds of mistakes, through the best of intentions, that we first encountered many years ago. Many teams start off well but are unaware of the pitfalls; many have no idea how to even get started.

We’ve decided to get together and offer deep-dive training based on our real-world experiences. We’re kicking off in November 2017 with a three-day, hands-on workshop on .NET architecture, microservices, distributed systems and devops. There will be both theory and practical sessions, covering topics like:

  • Design Patterns for maintainable software
  • Test Driven Development
  • DevOps practices such as monitoring and instrumentation
  • Continuous delivery
  • Configuration patterns
  • REST API implementation
  • Microservices
  • Asynchronous messaging
  • Scaling and caching
  • Legacy system integration and migration

Attendees will work with us, and with each other, to build an actual ecosystem to solve a small but real-world problem.

Book your ticket for the November 2017 (Brisbane) workshop now to lock in early-bird pricing.

We’ll be scheduling other workshops in other cities (and probably some more in Brisbane), so register your interest in other dates and cities via stackmechanics.com, follow us on Twitter via @stack_mechanics and on Facebook as Stack Mechanics.


Command-line Add-BindingRedirect

4 October 2016 00:00 AEST
.NET C# NuGet Dependency management Continuous delivery Open-source software (OSS)

One of the things I try to do as part of a build pipeline is to have automatic package updates. My usual pattern is something along the lines of a CI build that runs on every commit and every night, plus a Canary build that updates all the packages to their latest respective versions.

The sequence looks something like this:

  1. The Canary build:
    1. pulls the latest of the project from the master branch;
    2. runs nuget.exe update or equivalent;
    3. then compiles the code and runs the unit tests.
  2. If everything passes, it does (roughly) this:

     git checkout -b update-packages
     git add -A .
     git commit -m "Automatic package update"
     git push -f origin update-packages
    
     # Note: There's a bit more error-checking around non-merged branches and so on,
     # but that's fundamentally it.
    
  3. The CI build then:
    1. picks up the changes in the update-packages branch;
    2. compiles the code (yes, again, to make sure that we didn’t miss anything in the previous commit);
    3. runs the unit tests;
    4. deploys the package to a CI environment;
    5. runs the integration tests; and
    6. if all is well, merges the update-packages branch back down to master.

For what it’s worth, if a master build is green (and they pretty much all should go green if you’re building your pull requests) then out the door it goes. You do trust your test suite, don’t you? ;)

All of this can be done with stock TeamCity build steps with the exception of one thing: the call to nuget.exe update doesn’t add binding redirects and there’s no way to do that from the console. The Add-BindingRedirect PowerShell command is built into the NuGet extension to Visual Studio and there’s no way to run it from the command line.

That’s always been a bit of a nuisance and I’ve hand-rolled hacky solutions to this several times in the past so I’ve re-written a slightly nicer solution and open-sourced it. You can find the Add-BindingRedirect project on GitHub. Releases are downloadable from the Add-BindingRedirect releases page.

Pull requests are welcome :)


ConfigInjector 2.2 released

6 September 2016 00:00 AEST
.NET C# Configuration management ConfigInjector Open-source software (OSS)

ConfigInjector 2.2 is out and available via the nuget.org feed.

This release is a small tweak to allow exclusion of settings keys via expressions as well as via simple strings. Thanks to Damian Maclennan for this one :).

To exclude settings keys via exact string matches, as per before:

ConfigurationConfigurator.RegisterConfigurationSettings()
                         .FromAssemblies(ThisAssembly)
                         .RegisterWithContainer(configSetting => builder.RegisterInstance(configSetting)
                                                                        .AsSelf()
                                                                        .SingleInstance())
                         .ExcludeSettingKeys("DontCareAboutThis", "DontCareAboutThat")
                         .DoYourThing();

To exclude settings keys via expression matches:

ConfigurationConfigurator.RegisterConfigurationSettings()
                         .FromAssemblies(ThisAssembly)
                         .RegisterWithContainer(configSetting => builder.RegisterInstance(configSetting)
                                                                        .AsSelf()
                                                                        .SingleInstance())
                         .ExcludeSettingKeys(k => k.StartsWith("DontCare"))
                         .DoYourThing();

Introducing NotDeadYet

2 April 2016 00:00 AEST
.NET C# NotDeadYet Health checks Open-source software (OSS)

NotDeadYet is a simple, lightweight library to allow you to quickly add a health-checking endpoint to your .NET application.

It has integrations for ASP.NET MVC, WebApi and Nancy.

Why do I want this?

To easily generate an endpoint that returns something like this:

{
  "Status": "Okay",
  "Results": [
    {
      "Status": "Okay",
      "Name": "ApplicationIsRunning",
      "Description": "Checks whether the application is running. If this check can run then it should pass.",
      "ElapsedTime": "00:00:00.0000506"
    },
    {
      "Status": "Okay",
      "Name": "RssFeedsHealthCheck",
      "Description": "RSS feeds are available and have non-zero items.",
      "ElapsedTime": "00:00:00.0002528"
    }
  ],
  "Message": "All okay",
  "Timestamp": "2016-04-02T03:09:03.8525313+00:00",
  "NotDeadYet": "0.0.28.0"
}

When scaling out a web application, one of the first pieces of kit we encounter is a load balancer. When deploying a new version of the application, we generally pull one machine out of the load-balanced pool, upgrade it and then put it back into the pool before deploying to the next one.

NotDeadYet makes it easy to give load balancers a custom endpoint to do health checks. If we monitor just the index page of our application, it’s quite likely that we’ll put the instance back into the pool before it’s properly warmed up. It would be a whole lot nicer if we had an easy way to get the load balancer to wait until, for instance:

  • We can connect to any databases we need.
  • Redis is available.
  • We’ve precompiled any Razor views we care about.
  • The CPU on the instance has stopped spiking.

NotDeadYet makes it easy to add a /healthcheck endpoint that will return a 503 until the instance is ready to go, and a 200 once all is well. This plays nicely with New Relic, Amazon’s ELB, Pingdom and most other monitoring and load balancing tools.

Awesome! How do I get it?

Getting the package:

Install-Package NotDeadYet

In your code:

var healthChecker = new HealthCheckerBuilder()
    .WithHealthChecksFromAssemblies(ThisAssembly)
    .Build();

Doing a health check

var results = healthChecker.Check();
if (results.Status == HealthCheckStatus.Okay)
{
    // Hooray!
}
else
{
    // Boo!
}

Adding your own, custom health checks:

By default, NotDeadYet comes with a single ApplicationIsRunning health check which just confirms that the application pool is online. Adding your own (which is the point, after all) is trivial. Just add a class that implements the IHealthCheck interface and off you go.

public class NeverCouldGetTheHangOfThursdays : IHealthCheck
{
    public string Description
    {
        get { return "This app doesn't work on Thursdays."; }
    }

    public void Check()
    {
        // Example: just throw if it's a Thursday
        if (DateTimeOffset.Now.DayOfWeek == DayOfWeek.Thursday)
        {
            throw new HealthCheckFailedException("I never could get the hang of Thursdays.");
        }

        // ... otherwise we're fine.
    }

    public void Dispose()
    {
    }
}

Or a slightly more realistic example:

public class CanConnectToSqlDatabase : IHealthCheck
{
    public string Description
    {
        get { return "Our SQL Server database is available and we can run a simple query on it."; }
    }

    public void Check()
    {
        // We really should be using ConfigInjector here ;)
        var connectionString = ConfigurationManager.ConnectionStrings["MyDatabaseConnectionString"].ConnectionString;

        // Do a really simple query to confirm that the server is up and we can hit our database            
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            var command = new SqlCommand("SELECT 1", connection);
            command.ExecuteScalar();
        }
    }

    public void Dispose()
    {
    }
}

There’s no need to add exception handling in your health check - if it throws, NotDeadYet will catch the exception, wrap it up nicely and report that the health check has failed.

Framework integration

Integrating with MVC

In your Package Manager Console:

Install-Package NotDeadYet.MVC4

Then, in your RouteConfig.cs:

var thisAssembly = typeof (MvcApplication).Assembly;
var notDeadYetAssembly = typeof (IHealthChecker).Assembly;

var healthChecker = new HealthCheckerBuilder()
    .WithHealthChecksFromAssemblies(thisAssembly, notDeadYetAssembly)
    .Build();

routes.RegisterHealthCheck(healthChecker);

Integrating with Nancy

In your Package Manager Console:

Install-Package NotDeadYet.Nancy

Then, in your bootstrapper:

var thisAssembly = typeof (Bootstrapper).Assembly;
var notDeadYetAssembly = typeof (IHealthChecker).Assembly;

var healthChecker = new HealthCheckerBuilder()
    .WithHealthChecksFromAssemblies(thisAssembly, notDeadYetAssembly)
    .Build();

container.Register(healthChecker);

FAQ

How do I query it?

Once you’ve hooked up your integration of choice (currently MVC or Nancy), just point your monitoring tool at /healthcheck.

That’s it.

If you point a browser at it you’ll observe a 200 response if all’s well and a 503 if not. This plays nicely with load balancers (yes, including Amazon’s Elastic Load Balancer) which, by default, expect a 200 response code from a monitoring endpoint before they’ll add an instance to the pool.

Does this work with X load balancer?

If your load balancer can be configured to expect a 200 response from a monitoring endpoint, then yes :)

Can I change the monitoring endpoint?

Of course. In MVC land, it looks like this:

var healthChecker = new HealthCheckerBuilder()
    .WithHealthChecksFromAssemblies(typeof(MvcApplication).Assembly)
    .Build();

routes.RegisterHealthCheck(healthChecker, "/someCustomEndpoint");

and in Nancy land it looks like this:

HealthCheckNancyModule.EndpointName = "/someCustomEndpoint";

Does this work with my IoC container of choice?

NotDeadYet is designed to work both with and without an IoC container. There’s a different configuration method on the HealthCheckerBuilder class called WithHealthChecks which takes a Func<IHealthCheck[]> parameter. This is designed so that you can wire it in to your container like so:

public class HealthCheckModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        builder.RegisterAssemblyTypes(ThisAssembly, typeof (IHealthCheck).Assembly)
            .Where(t => t.IsAssignableTo<IHealthCheck>())
            .As<IHealthCheck>()
            .InstancePerDependency();

        builder.Register(CreateHealthChecker)
            .As<IHealthChecker>()
            .SingleInstance();
    }

    private static IHealthChecker CreateHealthChecker(IComponentContext c)
    {
        var componentContext = c.Resolve<IComponentContext>();

        return new HealthCheckerBuilder()
            .WithHealthChecks(componentContext.Resolve<IHealthCheck[]>)
            .WithLogger((ex, message) => componentContext.Resolve<ILogger>().Error(ex, message))
            .Build();
    }
}

This example is for Autofac but you can easily see how to hook it up to your container of choice.

Why don’t the health checks show stack traces when they fail?

For the same reason that we usually try to avoid showing a stack trace on an error page.

Can I log the stack traces to somewhere else, then?

You can wire in any logger you like. In this example below, we’re using Serilog:

var serilogLogger = new LoggerConfiguration()
    .WriteTo.ColoredConsole()
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();

return new HealthCheckerBuilder()
    .WithLogger((ex, message) => serilogLogger.Error(ex, message))
    .Build();

Do the health checks have a timeout?

They do. All the health checks are run in parallel and there is a five-second timeout on all of them.

You can configure the timeout like this:

var healthChecker = new HealthCheckerBuilder()
    .WithHealthChecksFromAssemblies(typeof(MvcApplication).Assembly)
    .WithTimeout(TimeSpan.FromSeconds(10))
    .Build();

What does the output from the endpoint look like?

It’s JSON and looks something like this:

{
  "Status": "Okay",
  "Results": [
    {
      "Status": "Okay",
      "Name": "ApplicationIsRunning",
      "Description": "Checks whether the application is running. If this check can run then it should pass.",
      "ElapsedTime": "00:00:00.0000006"
    },
    {
      "Status": "Okay",
      "Name": "RssFeedsHealthCheck",
      "Description": "RSS feeds are available and have non-zero items.",
      "ElapsedTime": "00:00:00.0005336"
    }
  ],
  "Message": "All okay",
  "Timestamp": "2015-11-14T11:42:35.3040908+00:00",
  "NotDeadYet": "0.0.10.0"
}

Sensitive setting support in ConfigInjector

30 March 2016 00:00 AEST
.NET C# Configuration management ConfigInjector Open-source software (OSS)

ConfigInjector 2.1 has been released with support for sensitive settings.

This is a pretty simple feature: if you have a sensitive setting and want to be cautious about logging it or otherwise writing it to an insecure location, you can now flag it as IsSensitive and optionally override the SanitizedValue property.

If you just want to mark a setting as sensitive, just override the IsSensitive property to return true. This will allow you to make your own judgements in your own code as to how you should deal with that setting. You can, of course, still choose to log it - it’s just an advisory property.

If you want to be a bit more serious, you can also override the SanitizedValue property to return a sanitized version of the value. By default, if you’re logging settings to anywhere you should log the SanitizedValue property rather than just the Value one.

public class FooApiKey: ConfigurationSetting<string>
{
    public override bool IsSensitive => true;

    public override string SanitizedValue => "********";
}

It’s worth noting that these properties do not change the behaviour of ConfigInjector; they simply allow us to be a bit more judicious when we’re dealing with these settings.


Talk video: Back to basics: simple, elegant, beautiful code

25 March 2016 00:00 AEST
.NET C# Software engineering Conference Community

This is the video of the talk I gave at DDD Brisbane 2015.

As a consultant I see so many companies using the latest and greatest buzzwords, forking out staggering amounts of cash for hardware and tooling and generally throwing anything they can at the wall to see what sticks. The problem? Their teams still struggle to produce high-quality output and are often incurring unsustainable technical debt. Codebases are still impossible to navigate and there’s always that underlying dread that one day soon someone is going to discover what a mess everything is.

How can this happen? It wasn’t supposed to be this hard! Don’t we all know all this stuff by now?

Let’s take a look at some patterns and practices to reduce the cognitive load of navigating a codebase, maintaining existing features and adding new ones, and all while shipping high-quality products. Fast.

Back to basics: simple, elegant, beautiful code


What's new in ConfigInjector 2.0

14 December 2015 00:00 AEST
.NET C# Configuration management ConfigInjector Open-source software (OSS)

ConfigInjector 2.0 has hit the NuGet feed.

What’s new? A handful of things:

  • Support for overriding settings via environment variables (useful for regression suites on build servers).
  • Support for loading settings from existing objects.
  • Logging hooks to allow callers to record where settings were loaded from and what their values were.

There are some breaking changes with respect to namespaces so it’s a major version bump but unless you’re doing anything really custom it should still be a straight-forward upgrade.


Building Tweets from the Vault: Twitter OAuth

1 February 2015 00:00 AEST
Tweets from the Vault C# .NET

In Building Tweets from the Vault: NancyFX tips and tricks we took a look at some of the refactoring-friendly approaches we can take when building a NancyFX application.

In this post we’ll see a very simple example of how Tweets from the Vault uses Twitter and Tweetinvi, a nice-to-call .NET library for Twitter. Tweetinvi has a whole lot more features than authentication but authentication in general is the main focus of this post, so here goes.

I’ll state in advance that this advice only briefly touches upon some really elementary application security. I’ll write a bit more about that in subsequent posts but please don’t treat this advice as anything other than a few brief pointers on the most basic of things. Do your homework. This stuff is important.

Requiring authentication in NancyFX

To begin with, our Nancy application has a base module class from which almost all others derive:

public abstract class AuthenticatedModule : RoutedModule
{
    protected AuthenticatedModule()
    {
        this.RequiresAuthentication();
    }
}

Our AuthenticatedModule just demands authentication and leaves everything else to its derived classes. It’s worth noting that there’s also a convention test (which we’ll discuss in another post) that asserts that every single module in the app must explicitly derive from either AuthenticatedModule or UnauthenticatedModule so as to leave no room for “Oh, I forgot to set the security on that one.”

In Tweets from the Vault, we’re using NancyFX’s StatelessAuthentication hook. We actually add an item to the request pipeline to check for 401 responses and send a redirect. In this way, our individual modules can just demand an authenticated user and return a 401 if not. It’s up to the rest of our pipeline to figure out that we should probably present a kinder response.

In our bootstrapper:

protected override void ApplicationStartup(ILifetimeScope container, IPipelines pipelines)
{
    ConfigureLogging(container);
    using (Log.Logger.BeginTimedOperation("Application starting"))
    {
        // A bunch of irrelevant stuff elided here
        ConfigureAuth(pipelines);
    }
}

private static void ConfigureAuth(IPipelines pipelines)
{
    // Yes, we're using the container as a service locator here. We're resolving
    // a .SingleInstance component once in a bootstrapper so I'm okay with that.
    var authenticator = IoC.Container.Resolve<Authenticator>();

    StatelessAuthentication.Enable(pipelines,
                                   new StatelessAuthenticationConfiguration(authenticator.Authenticate));

    pipelines.AfterRequest.AddItemToEndOfPipeline(ctx =>
    {
        if (ctx.Response.StatusCode == HttpStatusCode.Unauthorized)
        {
            var response = new RedirectResponse(Route.For<SignIn>());
            ctx.Response = response;
        }
    });
}

Let’s have a look at what this is doing. We can see that the item is being added to the end of the pipeline. That means that it will be executed after our module has done its thing and returned. If the module does an early exit and returns a 401, that will be observable in ctx.Response.StatusCode and we’ll mess with it; otherwise we’ll just pass the response straight through.

If we’ve observed a 401, we clobber the 401 response with a 302 and bounce the user back to the SignIn page using the Route.For expression that we looked at in Building Tweets from the Vault: NancyFX tips and tricks. It’s noteworthy that the browser will never see a 401; just a 302.

What about Twitter and OAuth?

The assumption I’m making here is that you’ll actually want to do something on behalf of a user using the Twitter API. That’s pretty obvious as it’s what Tweets from the Vault does, but I’m going to state up-front: if all you want is an identity via OAuth, this is a harder way to do it than you need. If you want API access, however, then read on.

The first thing you’ll need is an application on Twitter. Go to apps.twitter.com to create one.

The next thing you’ll need is an X.509 certificate. You’ll be telling Twitter to pass keys to access other people’s accounts via GET parameters, so don’t be sending those around the place in plaintext. Incidentally, Twitter does support localhost as a valid redirect URL target, so you’ll be fine for your own testing. Just make sure that you never present a sign-in/sign-up page other than via HTTPS, and likewise make sure your callback URL is HTTPS as well.

You’ll also want the Tweetinvi package:

Install-Package Tweetinvi

Once we’ve hit our SignIn page, it creates a Twitter sign-in credentials bundle using Tweetinvi. This isn’t exactly the code in Tweets from the Vault as there are a few abstractions here and there - I’ve inlined a few things - but it’s pretty close. In our SignIn module:

// You'll want to stash these somehow as there's a single-use token in there
// that you'll need to decode the response.
var temporaryCredentials = TwitterCredentials.CreateCredentials(userAccessToken,
                                                                userAccessSecret,
                                                                _twitterConsumerKey,
                                                                _twitterConsumerSecret);
var authenticationUrl = CredentialsCreator.GetAuthorizationURLForCallback(temporaryCredentials, redirectUrl);

// Hack because the Tweetinvi library doesn't seem to support just authentication - it wants to make an
// authorize call all the time. This will happen anyway on the first time someone uses your app but
// forever after an authenticate call will just bounce straight back whereas an authorize call will
// continue to prompt.
authenticationUrl = authenticationUrl.Replace("oauth/authorize", "oauth/authenticate");

We redirect the user to that authenticationUrl, which will be somewhere on twitter.com, and Twitter will present them with an “Authorize this App” page.

Then, in our SignInCallback module:

var temporaryCredentials = /* fetch these from wherever you stashed them */
var userCredentials = CredentialsCreator.GetCredentialsFromCallbackURL(callbackUrl, temporaryCredentials);
var twitterUser = User.GetLoggedUser(userCredentials);

At this point, we have a valid Twitter user who’s been verified for us by Twitter (thank you :) ) We’ll also have a set of keys to allow us to make API calls as that user to the extent permitted by the privileges that your app was granted by the user.

Of course, if the user declines to authorise the Twitter app to use their account then you’ll get back a different response. Be sure to handle that.

Now what?

Now we have a user who’s just presented us with a valid set of callback tokens from Twitter via a redirect URL. That’s nice, but we shouldn’t be leaving those lying around. What we should be doing is generating our own authentication token of some sort and sending that back as a cookie. (Remember to give people some way to destroy that cookie once they leave a machine, too - you need a “Sign out” button[1]!)

A good way to do this is using a JSON Web Token or similar. There are a bunch of libraries (and opinions) out there on The One True Way™ to do it but the general principle is roughly the same as HTTP cookies: you shove a bunch of claims into a JSON object, sign it and give it to the browser. When it makes a request it can supply that via a cookie.
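
Purely to make that principle concrete, here’s a deliberately minimal sketch of the signing step using nothing but the BCL (System.Text and System.Security.Cryptography). It illustrates the header.payload.signature shape; it’s not something to ship - use one of those libraries in real code:

// Illustration only: an HMAC-signed token in the general JWT shape,
// i.e. base64url(header).base64url(payload).base64url(signature)
private static string CreateToken(string claimsJson, byte[] signingKey)
{
    const string headerJson = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";

    var header = Base64UrlEncode(Encoding.UTF8.GetBytes(headerJson));
    var payload = Base64UrlEncode(Encoding.UTF8.GetBytes(claimsJson));

    using (var hmac = new HMACSHA256(signingKey))
    {
        var signature = hmac.ComputeHash(Encoding.UTF8.GetBytes(header + "." + payload));
        return header + "." + payload + "." + Base64UrlEncode(signature);
    }
}

private static string Base64UrlEncode(byte[] bytes)
{
    return Convert.ToBase64String(bytes)
                  .TrimEnd('=')
                  .Replace('+', '-')
                  .Replace('/', '_');
}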

The JWT standard doesn’t specify encryption - it’s about sending information in plaintext but making it verifiable. That said, if you don’t have to inter-operate with anyone else (i.e. you’re just doing your own sign-on, not implementing SSO across a group of sites) then go ahead and encrypt it. It will help prevent other people stickybeaking into what you’ve bundled into there but still let you use someone else’s library code rather than hand-rolling your own. It should go without saying[2] that if you’re going to put any sensitive information into it then 1) have a careful think about whether you actually need to do that, and 2) make sure you’re using a reputable encryption algorithm with a decent-length key.

Using this approach you can put pretty much anything into your token. As a general rule, I’d like to be able to load a page and only hit persistent storage for data specific to that page. Loading a user’s name, profile picture URL or anything else that is part of the ambient experience goes into the encrypted token. This means that I can render most pages without hitting a dbo.Users or similar. The token doesn’t need to be readable by anyone else but it does need to be relatively small as it’s going to be transmitted by the browser on every request. Also think about what you’ll do in the case of wanting to disable a user account - if you’re not checking dbo.Users every request then how will you know to return a 403?

Be sensible. Don’t create another ViewState. Don’t treat it like session state[3].

So we’re done?

Not quite. You’ll probably want to create your own representation of a user once you have a confirmed Twitter identifier. I’d also use the Twitter 64-bit int as your foreign key, not the username, as that may well change.
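
As a purely hypothetical sketch (the names here are mine, not from Tweets from the Vault), that representation can be as boring as:

public class User
{
    // Twitter's 64-bit account identifier: stable even when the username changes,
    // so it's the thing to key on.
    public long TwitterId { get; set; }

    // Handy for display, but never use it as a key.
    public string ScreenName { get; set; }

    public DateTimeOffset SignedUpAtUtc { get; set; }
}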

It’s worth bearing in mind that Twitter’s OAuth solution does not provide users’ email addresses so that’s something you’ll have to either request for yourself or live without. That’s up to you :) Likewise, we’re relying on Twitter’s anti-spam measures to prevent malicious sign-ups. That’s not unreasonable in the first instance but don’t expect it to be perfect.

In the next post in this series, we’ll take a look at some interesting domain event modelling as part of implementing payments using Stripe.

[1] And it should generate a POST request, not a GET one, but that’s a story for another day.

[2] Hence, of course, needing to say it…

[3] An evil to be discussed some other time…


Building Tweets from the Vault: NancyFX tips and tricks

19 January 2015 00:00 AEST
Tweets from the Vault C# .NET

In Building Tweets from the Vault: Azure, TeamCity and Octopus, I wrote about the hosting and infrastructure choices I made for Tweets from the Vault. This article will cover a bit more about the framework choices, notably NancyFX.

NancyFX

NancyFX may sound like a bit more of an esoteric choice, especially to the Microsoft-or-die crowd. I’ve been having a pretty fair amount of success with Nancy, however. I love the way that it just gets out of the road and provides a close-to-the-metal experience for the common case but makes it simple to extend behaviour.

It’s by no means perfect - I’m not a huge fan of “dynamic, dynamic everywhere” - but it’s way better than MVC for my needs. The upgrade path is a whole lot less troublesome, too - the best advice I’ve found for upgrading between major versions of MVC is to create a new project and copy the old content across.

Application structure

The equivalent of an MVC controller in NancyFX is the module. In a typical MVC controller, there are lots (usually far too many) methods (controller actions) that do different things. While this isn’t strictly a feature of the framework, all the sample code tends to guide people down the path of having lots of methods on an average controller, with a correspondingly large number of dependencies.

In MVC, routing to controller actions is taken care of by convention, defaulting to the controller’s type name and method name. For instance, the /Home/About path would (by default) map to the About() method on the HomeController class.

Nancy routes are wired up a little bit differently. Each module gets to register the routes that it can handle in its constructor, so if I wanted to emulate the above behaviour I’d do something like this:

public class HomeModule : NancyModule
{
    public HomeModule()
    {
        Get["/Home/Index"] = args => /* some implementation here */;
        Get["/Home/About"] = args => /* some implementation here */;
        Get["/Home/Contact"] = args => /* some implementation here */;
    }
}

Obviously, if we want the same Nancy module to handle more than one route then we just wire up additional routes in the module’s constructor and we’re good.

This is nice in a way but it’s also a very easy way to cut yourself and I tend to not be a fan. Not only that, but it still leads us down the path of violating the Single Responsibility Principle in our module.

My preference is to have one action per module and to name and namespace each module according to its route. Thus my application’s filesystem structure would look something like this:

app
    Home
        Index.cs
        About.cs
        Contact.cs

This makes it incredibly easy to navigate around the application and I never have to wonder about which controller/module/HTTP handler is serving a request for a particular path.

My About.cs file would therefore look something like this (for now):

public class About : NancyModule
{
    public About()
    {
        Get["/Home/About"] = args => /* some implementation here */;
    }
}

RoutedModule

One problem with the above approach is that it’s not refactoring-friendly. If I were to change the name of the About class then I’d also need to edit the route registration’s magic string. Magic strings are bad, mmmkay?

A simple approach for the common case (remembering that it’s still easy to manually register additional routes) is to just derive the name of the route from the name and namespace of the module. (Hey, I didn’t say that all of MVC was bad.)

public abstract class RoutedModule : NancyModule
{
    protected RoutedModule()
    {
        var route = Route.For(GetType());
        Get[route, true] = (args, ct) => HandleGet(args, ct);
        Post[route, true] = (args, ct) => HandlePost(args, ct);
    }

    protected virtual async Task<dynamic> HandleGet(dynamic args, CancellationToken ct)
    {
        return (dynamic) View[ViewName];
    }

    protected virtual Task<dynamic> HandlePost(dynamic args, CancellationToken ct)
    {
        throw new NotSupportedException();
    }

    protected virtual string ViewName
    {
        get { return this.ViewName(); }
    }
}

This now allows for our About.cs file to look like this:

public class About : RoutedModule
{
}

Routes

We’re not quite there yet. I’m not a fan of magic strings and in the above example you can see a call to a static Route.For method. That method is where the useful behaviour is, and it looks like this:

public static class Route
{
    private static readonly string _baseNamespace = typeof (Index).Namespace;

    public static string For<TModule>() where TModule : RoutedModule
    {
        return For(typeof (TModule));
    }

    public static string For(Type moduleType)
    {
        var route = moduleType.FullName
                              .Replace(_baseNamespace, string.Empty)
                              .Replace(".", "/")
                              .Replace("//", "/")
                              .ToLowerInvariant();
        return route;
    }

    public static string ViewName(this RoutedModule module)
    {
        // Left as an exercise for the reader :)
    }
}

This allows us to have a completely refactor-friendly route to an individual action. There are a couple of similar routing efforts for MVC, notably in MVC.Contrib and MvcNavigationHelpers, but this lightweight approach doesn’t require building and parsing of expression trees. (It’s worth noting that it doesn’t account for a full route value dictionary, either, but you can add that if you like.)
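
The ViewName extension above really is left as an exercise, but purely as an illustration, one way it could work - under the assumption that each module’s view shares its route path - is:

public static string ViewName(this RoutedModule module)
{
    // One possible convention: re-use the module's route as the view path,
    // so a module routed at /home/about looks for the "home/about" view.
    return For(module.GetType()).TrimStart('/');
}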

In our views, our URLs can now be generated like this:

<a class="navbar-brand" href="@(Route.For<Index>())">
    Tweets from the Vault
</a>

and in our modules, like this:

return new RedirectResponse(Route.For<Index>());

A quick ^R^R (Refactor, Rename, for all you ReSharper Luddites) of any of our modules and you can see that we haven’t broken any of our links or redirects.

In the next post in this series, we’ll take a quick look at authenticating with Twitter using OAuth.


Building Tweets from the Vault: yet another Bootstrap site?

17 January 2015 00:00 AEST
Tweets from the Vault C# .NET

There’s even a Tumblr for this.

The reality, however, is that Bootstrap is incredibly popular for good reason. It’s responsive out-of-the-box, is delivered via other people’s CDNs (for which I thank you :) ) and provides a relatively familiar UI paradigm.

In keeping with the “minimum viable product” theme, Bootstrap allows for very quick… err… bootstrapping of a pleasant, clean, simple web application with the minimum of fuss.

This is just a quick post to get the “Yet another Bootstrap site?” question out of the way. In the next post in this series I’ll look in a bit more detail at NancyFX and some sneaky tricks to make it refactoring-friendly.


Building Tweets from the Vault: Azure, TeamCity and Octopus

15 January 2015 00:00 AEST
Tweets from the Vault C# .NET

In the previous post in this series, Building Tweets from the Vault: Minimum Viable Product, I wrote about the absolute minimum feature set to get Tweets from the Vault off the ground.

In this post I’m going to write a bit more about some of the technology choices. So… what were they, and why?

  • Azure
  • TeamCity
  • Octopus Deploy
  • NancyFX
  • Bootstrap
  • Tweetinvi
  • Stripe + Stripe.NET
  • Serilog + Seq

Azure + TeamCity + Octopus Deploy

Obviously, I was going to need a hosting platform that was easy to get started with but that could scale if (when? ha!) my app hits the big-time. The app (and the article you’re currently reading) is running on IIS on an Azure VM and is deployed via Octopus Deploy.

To ship a feature

git add -A .
git commit -m "Added some feature"
git push

TeamCity will pick up that change, build it, run my tests, push a package to Octopus Deploy and then deploy that package to my test environment. A quick sanity-check there[1] and then it gets promoted to the live Azure environment. Using this tool suite a change can go from my MacBook to production via a sensible build + test + deploy pipeline in under two minutes.

For anything more complicated, I’ll use a feature branch. TeamCity will automatically pick up and build refs/heads/* so all of my branches get the same treatment, all the way through to packaging in Octopus and deploying to a test site.

Hotfixes are treated in the same way as feature branches. If I have to revert to any particular revision, it’s simple:

git checkout some-hash
git checkout -b hotfix-some-fix
git add -A .
git commit -m "Fixed some bug"
git push

That build will go straight to my test environment through the normal build + test + deploy pipeline and I can then tell Octopus to promote that hotfix package to production. No mess; no fuss.

In the next posts in this series, I’ll write a bit about Bootstrap and NancyFX.

[1] I trust my test suite. You should trust yours, too - or write better ones ;)


Building Tweets from the Vault: Minimum Viable Product

13 January 2015 00:00 AEST
Tweets from the Vault C# .NET

Tweets from the Vault is a service that will take a random[1] historical item from your RSS feeds and tweet a link to it.

When I started building the service, my goals were simple:

  • Solve my own problem
  • Get to a minimum viable product as quickly as possible

In this series of posts I’m going to look at each of those points in a little more detail.

Solve my own problem

There’s a bunch of content in my blog that is still very relevant today. Lots of stuff on agile; lots on software principles and some valuable odds and ends that were starting to be lost to the archives.

I have IFTTT tweeting content as soon as it hits my RSS feeds, which is great, and obviously that leads to people’s visiting those posts while that tweet appears in their timeline. Once that tweet falls off their timeline, though, all bets are off - nobody’s likely to see that tweet ever again.

I wanted a solution that would periodically fish out a historical but relevant article and tweet a link to it and there wasn’t a service that did that for me in a way that I liked. The closest I could find was an outdated Wordpress plugin (and I don’t use Wordpress). Well… I blog mostly about software so why wouldn’t I write one for myself? And, if I were to write one for myself, perhaps I could tweak it a bit and make it useful to other people. And thus, Tweets from the Vault was born.

Minimum viable product

The minimum viable product for me was pretty simple:

  • Sign in using a Twitter account
  • Set up a small, recurring payment
  • Add and remove RSS feeds
  • On a schedule:

    • Pick a random article from that set of RSS feeds
    • Tweet it

And that’s pretty much it.

In the next post in this series, I’ll start looking at some of the technology choices for the app.

[1] For a given definition of “random”.


Tweets from the Vault

7 January 2015 00:00 AEST
Tweets from the Vault

I built a thing!

I keep linking people to old blog posts of mine. Sometimes it’s to solve a problem that was solved a long time ago; other times it’s to make a point that the argument they’re having isn’t new. Either way, there’s a whole bunch of valuable content locked up with nothing but an “Archives” link on a web site to show that it ever existed.

I went looking for a way to re-publish some of this content and couldn’t find anything that did what I wanted. Thus, Tweets from the Vault was born.

Tweets from the Vault is a paid service that will pick a random item out of any set of RSS feeds you give it and tweet it from your account.

Landing Page Screenshot

Dashboard Screenshot

I’ve priced the lowest plan of one tweet per day at $1/month. That’s less than the average person would spend on electricity to power their laptop for the month, and way less than you’d spend on even a half-decent coffee.

You’ll be seeing it in my Twitter feed - and I hope I’ll see it in yours :)


Am I interviewing you? Here's what I'm going to ask.

20 March 2014 00:00 AEST
Career

If you have an interview scheduled with me, here’s what I’m going to ask you.

This is a tongue-in-cheek guide to interviewing with anyone. There’ll be some fun poked, some snark and some genuine advice. I’ll leave it to you to decide which is which :)

So… Hi! Firstly, if you’ve arrived here because you’re doing your homework, good for you. Have a point. Have two, even. I’m feeling generous today.

I do a lot of interviews. Depressingly, even with all the advice available to them, people still fall down on the same things time after time after time. If you’re diligent enough to be doing your homework by reading this, you should be fine :) If you’re reading this after your interview with a sinking feeling in your stomach… well… this homework was probably due yesterday. Sorry.

Be on time.

If we’re interviewing in person, be on time. I’ll be there. If we’re interviewing via Skype, I will add you as a contact at precisely our scheduled time. Expect this. Be online. Failing at time zone calculations for any position in the software industry does not bode well.

If you’re horrified that I even have to say this then have another point :)

Ask me questions.

With respect to consulting, I want you to treat me as you’d treat a client. The questions you ask in advance and during the interview will help both of us. It will help you understand if you’re answering my questions well, and it will help me understand that you know what questions to ask.

Ask me about stuff you’re curious about. Ask me about pretty much anything. Just show that you can hold a conversation and elicit useful knowledge about a topic at the same time.

Ask me about anything that helps you decide whether we’re a good cultural fit. I’ll be asking you similar questions and fair’s fair.

I’m hiring colleagues, not minions. I want to like you.

I’ll be hoping that you’d like to work with me. Your mission is to make me want to work with you. I’m looking for people who are interesting, engaging and fun to hang out with. I want people on my teams who other people will want to work with. We’ll probably be spending a fair bit of time together and I’d like that time to be enjoyable for all parties.

I’m going to ask you about what you’re strongest in.

There’s no value in finding weaknesses in things you’ve already told me you’re not great at. That’s fine. If you say you hate Windows Forms then why would I ask esoteric questions about it? That serves neither of us well. (Besides, I don’t like WinForms, either.) I’m going to play to your strengths. If your strengths are strong, good for you. If your strengths are weak then I don’t need to dig much further.

If you tell me you rate yourself as a thought leader in a space, I expect you to be able to teach it to me from first principles because that’s what your clients will be paying four figures per day for. If you tell me, for instance, that you’re a thought leader on an open-source framework, I’ll assume you’re a committer to it and ask you what you last pushed.

If you’re good at something, say so. It’s your opportunity to show knowledge and enthusiasm. If you’re not good at something, say so and we’ll move on. That’s perfectly okay and I won’t hold it against you. Don’t bluff. I’ll call.

I’m going to ask about what you’re interested in, not what I’m interested in.

I want you to get enthusiastic about something. Teach me something. Make me enthusiastic about/interested in something. Creating enthusiasm and engagement in other people is a life skill, not just a consulting one. I’m hiring for that skill.

I will expect you to know your fundamentals.

In the .NET space, this means that I’m going to ask you about the CLR, stack, heap, memory allocation, garbage collection, generics and all the other stuff that you use day in and day out.

In the agile space I’m going to ask you for opinions about Scrum, Kanban, lean and so on. You’re going to need to discuss these, not just parrot the definition of a user story.

We’ll cover lots of other topics but not knowing your fundamentals is a cardinal sin. It’s akin to stealing from your client and it’s… not a behaviour I’d encourage. They’re called fundamentals for a reason :)

If you’ll be paid to write code then, yes, I will expect you to write code.

You’re probably interviewing for some kind of software engineering position. Be prepared to demonstrate that you can walk the walk.

Final notes.

People generally take this kind of advice in one of two ways:

  1. They’re offended because some of it applies to them.
  2. They’re horrified that it would apply to anybody.

If you’re the latter then we’ll probably have war stories to share, heaps to chat about and I’ll be looking forward to meeting you. If you’re the former then even if you’re offended by what I’ve written I hope it’s constructive in one way or another for you. Have fun storming the castle!


Brisbane Azure User Group talk on Azure Service Bus Made Easy

18 March 2014 00:00 AEST
.NET C# Event-driven architecture Nimbus Community

Damian Maclennan and I did a talk at the Brisbane Azure User Group on Azure Service Bus Made Easy. Here’s the video :)

Azure Service Bus Made Easy

And here are Damian’s slides from the night:

Azure Service Bus Made Easy


Support for long-running handlers in Nimbus

13 March 2014 00:00 AEST
.NET C# Event-driven architecture Nimbus Open-source software (OSS)

Shiny, new feature: Nimbus now gives command/event/request handlers the option to run for extended periods.

From day zero Nimbus has supported competing command handlers, allowing us to spin up an arbitrary number of handlers to increase throughput. One issue we’ve run into concerns how we reliably handle messages, and how and when retries are attempted.

You’d think (naively) that a normal workflow would look something like this:

  1. Pop a command from the queue.
  2. Handle that command.

But what happens when the command handler goes bang? We need some way of putting that command back onto the queue for someone else to attempt. Again, a naive approach would be something like this:

  1. Pop a command from the queue.
  2. Handle that command.
  3. If that command goes bang, put it back onto the queue.

So… where does the command live during Step #2? The only place for it to live is on the node that’s actually doing the work - and this is a problem. If that node simply throws an exception then we could catch it and put the message back onto the queue. But what if the power goes out? Or a disk goes crunch? (Or crackle, given that we’re in SSD-land now?) What if that node never comes back?

If that node never comes back, the message never gets re-enqueued, which means we’ve violated our delivery guarantee. Oops.

Thankfully, that’s not how it works. Under the covers, the Azure Message Bus does some clever stuff for us. The actual workflow looks something like this:

  1. Tentatively pop a message from the head of the queue.
  2. Attempt to handle that message.
  3. If we succeed, call BrokeredMessage.Complete()
  4. If we fail, call BrokeredMessage.Abandon()

The missing piece in this puzzle is still what happens if the power goes out. In this case, the Azure Message Bus will automatically re-queue the message after a certain time period (called the peek-lock timeout) and won’t allow the original (now-timed-out) handler to call either .Complete() or .Abandon() on the message any more. In essence, it’s saying “You get XX seconds to handle the message and if I don’t hear back from you one way or the other before that time elapses then I’ll assume you’ve vanished and will give someone else a chance to handle it.”
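
For the curious, the shape of that dance against the underlying brokered-message API looks roughly like this (a conceptual sketch only - queueClient and HandleMessage are stand-ins, and Nimbus does all of this for you):

// Tentatively pop: the message is locked for this receiver, not removed.
var message = queueClient.Receive();

try
{
    HandleMessage(message);    // whatever your handler does
    message.Complete();        // success: the message is removed for good
}
catch (Exception)
{
    message.Abandon();         // failure: release the lock so someone else can retry
}

// If the process dies before Complete() or Abandon() is called, the peek-lock
// timeout expires and the message reappears at the head of the queue.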

So what’s the problem, then?

The problem arises when we have a command handler that legitimately takes longer than the peek-lock timeout to do its thing. We’ve seen this scenario in the wild with people doing things like Selenium-based screen-scraping of legacy web sites, really long-running aggregate queries or ETL operations on databases and a bunch of other scenarios.

Let’s have a look at our PizzaMaker as an example. Here’s our IncomingOrderHandler class:

public class IncomingOrderHandler : IHandleCommand<OrderPizzaCommand>
{
    private readonly IPizzaMaker _pizzaMaker;

    public IncomingOrderHandler(IPizzaMaker pizzaMaker)
    {
        _pizzaMaker = pizzaMaker;
    }

    public async Task Handle(OrderPizzaCommand busCommand)
    {
        await _pizzaMaker.MakePizzaForCustomer(busCommand.CustomerName);
    }
}

and our PizzaMaker looks something like this:

public class PizzaMaker : IPizzaMaker
{
    private readonly IBus _bus;

    public PizzaMaker(IBus bus)
    {
        _bus = bus;
    }

    public async Task MakePizzaForCustomer(string customerName)
    {
        await _bus.Publish(new NewOrderRecieved {CustomerName = customerName});
        Console.WriteLine("Hi {0}! I'm making your pizza now!", customerName);

        await Task.Delay(TimeSpan.FromSeconds(45));

        await _bus.Publish(new PizzaIsReady {CustomerName = customerName});
        Console.WriteLine("Hey, {0}! Your pizza's ready!", customerName);
    }
}

Let’s say that the peek-lock timeout is set to 30 seconds and making a pizza takes 45 seconds. What will happen in this case is that the first handler will be spun up and given the command instance to handle. It will start to do its thing and all is well and good. Thirty seconds later, the bus decides that that handler has died so it revokes its lock, puts the message back at the head of the queue and promptly gives it to someone else.

After another 15 seconds, the first handler will finish (presumably successfully) and will attempt to call .Complete() on its message, which will make it throw an exception as it no longer holds a lock. What’s worse is that this will repeat until the maximum number of delivery attempts has been exceeded.

We’ve just made five pizzas for the one order. And none of them has been recorded as successful. Oops.

What do I have to do to make it all Just Work™?

All you need to do is implement the ILongRunningHandler interface on your handler class. Let’s update our IncomingOrderHandler example from earlier:

public class IncomingOrderHandler : IHandleCommand<OrderPizzaCommand>, ILongRunningHandler    // Note the additional interface
{
    private readonly IPizzaMaker _pizzaMaker;

    public IncomingOrderHandler(IPizzaMaker pizzaMaker)
    {
        _pizzaMaker = pizzaMaker;
    }

    public async Task Handle(OrderPizzaCommand busCommand)
    {
        await _pizzaMaker.MakePizzaForCustomer(busCommand.CustomerName);
    }

    // Note the new method
    public bool IsAlive
    {
        get { return true; }
    }
}

The ILongRunningHandler interface has a single, read-only property on it: IsAlive. All you need to do is return true if your handler is still happily executing or false if it’s not. In this case, we’ve taken the very naive approach of just returning true but it might make more sense, for instance, to ask our PizzaMaker instance if they still have an order for the customer in the works.
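
For example, a slightly less naive implementation might delegate the liveness check to the collaborator doing the actual work (HasOrderInFlight here is a hypothetical method, not part of the sample):

public bool IsAlive
{
    // Still alive as long as the pizza maker is actually working on an order.
    get { return _pizzaMaker.HasOrderInFlight(); }
}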

Under the covers, Nimbus will automatically renew the lock it’s taken out on the message for you so that you can take as long as you like to handle it.


The Readify Firehose: An aggregated feed of a bunch of random Readifarians

12 March 2014 00:00 AEST
Community

In true Readify style, we had the idea for a thing the other day and launched it the next morning. It’s a small thing but still a thing.

Well, we built it, launched it, got a bunch of people to agree that it was a good idea, populated it and publicised it. That was Thursday night/Friday morning.

The Firehose

The Readify Firehose is a simple aggregated RSS feed of a bunch of participating Readify consultants and other bloggers. It’s entirely opt-in so you may or may not find your favourite author there (yet) but we hope it’s useful.

You can have a look around at readify-firehose.azurewebsites.net or subscribe directly to the feed.


Your domain model is too big for RAM

11 March 2014 00:00 AEST
Conference Community Event sourcing Event-driven architecture

Here’s the video from my DDD Brisbane 2013 talk.

Your domain model is too big for RAM (and other fallacies)


Stopping a Visual Studio build on first error

8 March 2014 00:00 AEST
Visual Studio

I can’t believe I’ve lived without this for so long.

Einar Egilsson has written a wonderfully-useful little plugin for Visual Studio that will allow you to stop the entire build process as soon as there’s a single project that fails.

This helps when you have a solution with tens (Hundreds? Please don’t do that.) of projects in it and there’s a compilation failure in one of the core projects upon which most of the others depend. You know that the build’s going to fail but often it’s just too much hassle to stop it manually - especially if you’re on a keyboard that doesn’t have a Break key.

The plugin has been around for a few years now and I can’t believe I’ve never searched for something like it before today.

You can get the plugin from the Visual Studio Gallery.


ConfigInjector now supports static loading of settings

6 March 2014 00:00 AEST
.NET C# Configuration management ConfigInjector Open-source software (OSS)

ConfigInjector 1.1 has just been released.

It now supports static loading of individual settings so you can grab settings directly from your app/web.config files without using the dreaded magic strings of ConfigurationManager.AppSettings.

This is a necessary feature but please give some serious thought to whether it’s a good idea in your particular case. If you genuinely need access to settings before your container is wired up, go ahead. If you’re using ConfigInjector as a settings service locator across your entire app, you’re holding it wrong :)

Here’s how:

var setting = DefaultSettingsReader.Get<SimpleIntSetting>();

ConfigInjector will make an intelligent guess at defaults. It will, for instance, walk the call stack that invoked it and look for assemblies that contain settings and value parsers. If you have custom value parsers it will pick those up, too, provided that they’re not off in a satellite assembly somewhere.

If you need to globally change the default behaviour, create a class that implements IStaticSettingReaderStrategy:

public class MyCustomSettingsReaderStrategy : IStaticSettingReaderStrategy
{
    // ...
}

and use this to wire it up:

DefaultSettingsReader.SetStrategy(new MyCustomSettingsReaderStrategy());

If you’re using ConfigInjector and like it, please let me know. There’s Disqus below and there’s always Twitter :)


Request and response with Nimbus

5 March 2014 00:00 AEST
.NET C# Event-driven architecture Nimbus Open-source software (OSS)

In this article we’re going to have a look at the request/response patterns available in Nimbus.

We’ve already seen Command handling with Nimbus and Eventing with Nimbus about command and event patterns respectively; now it’s time to take a look at the last key messaging pattern you’ll need to understand: request/response.

To get this out of the way right up-front, let’s be blunt: request/response via a service bus is the subject of religious wars. There are people who argue adamantly that you simply shouldn’t do it (possibly because their tools of choice don’t support it very well ;) ) and there are others who are in the camp of “do it but use it judiciously”. I’m in the latter. Sometimes my app needs to ask someone a question and wait for a response before continuing. Get over it.

Anyway, down to business.

The first item on our list is a simple request/response. In other words, we ask a question and we wait for an answer. One key principle here is that requests should not change the state of your domain. In other words, requests are a question, not an instruction; a query, not a command. There are some exceptions to this rule but if you’re well-enough versed in messaging patterns to identify these (usually but not exclusively the try/do pattern) then this primer really isn’t for you.

Let’s take another look at our inspirational text messaging app. If you’re not familiar with it, now would be a good time to have a quick flick back to the previous two posts in the series. Go ahead. I’ll wait :)

So a customer has just signed up for our inspirational text message service and we’re in the process of taking a payment. Our initial payment processing code might look something like this:

public async Task BillCustomer(Guid customerId, Money amount)
{
    await _bus.Send(new BillCustomerCommand(customerId, amount));
}

and our handler code might look something like this:

public class BillCustomerCommandHandler: IHandleCommand<BillCustomerCommand>
{

    ...

    [PCIAuditable]
    public async Task Handle(BillCustomerCommand busCommand)
    {
        var customerId = busCommand.CustomerId;
        var amount = busCommand.Amount;

        var creditCardDetails = _secureVault.ExtractSecuredCreditCardDetails(customerId);

        var fraudCheckResponse = await _bus.Request(new FraudCheckRequest(creditCardDetails, amount));

        if (fraudCheckResponse.IsFraudulent)
        {
            await _bus.Publish(new FraudulentTransactionAttemptEvent(customerId, amount));
        }
        else
        {
            _cardGateway.ProcessPayment(creditCardDetails, amount);
            await _bus.Publish(new TransactionCompletedEvent(customerId, amount));
        }
    }
}

So what’s going on here? We can see that our handler plucks card details from some kind of secure vault (this isn’t a PCI-compliance tutorial but nonetheless please, please, please don’t pass credit card numbers around in the clear) and performs a fraud check on the potential transaction. The fraud check could involve the number of times we’ve seen that credit card number in the past few minutes, the number of different names we’ve seen associated with the card, the variation in amounts… the list is endless. Let’s assume for the sake of this scenario that we have a great little service that just gives us a boolean IsFraudulent response and we can act on that.

Scenario #1: Single fraud-checking service

Single fraud-checking service

In this scenario we have our app server talking to our fraud-checking service. We’ll ignore our web server for now. It still exists but doesn’t play a part in this scenario.

This is actually pretty straight-forward: we have one app server (or many; it doesn’t matter) asking questions and one fraud-checking service responding. But, as per usual, business is booming and we need to scale up in a hurry.

Scenario #2: Multiple fraud-checking services

Multiple fraud-checking services

We’ve already done pretty much everything we need to do to scale this out. Our code doesn’t need to change; our requestor doesn’t need to know that its requests are being handled by more than one responder and our responders don’t need to know of each other’s existence. Just add more fraud checkers and we’re all good.

Only one instance of a fraud checker will receive a copy of each request so, as per our command pattern, we get load-balancing for free.

Scenario #3: Multicast request/response (a.k.a. Black-balling)

Black-balling

Let’s now say that we want our fraud checking to take a different shape. We don’t have a single fraud-checking service any more; we have a series of different fraud checkers that each do different things. One might do a “number of times this card number has been seen in the last minute” check and another might do an “Is this a known-compromised card?” check.

In this scenario, we might just want to ask “Does anybody object to this transaction?” and let different services reply as they will.

The first cut of our billing handler could now look something like this:

public class BillCustomerCommandHandler: IHandleCommand<BillCustomerCommand>
{

    ...

    [PCIAuditable]
    public async Task Handle(BillCustomerCommand busCommand)
    {
        var customerId = busCommand.CustomerId;
        var amount = busCommand.Amount;

        var creditCardDetails = _secureVault.ExtractSecuredCreditCardDetails(customerId);

        var fraudCheckResponses = await _bus.MulticastRequest(new FraudCheckRequest(creditCardDetails, amount),
                                                              TimeSpan.FromSeconds(1));

        if (fraudCheckResponses.Any())
        {
            await _bus.Publish(new FraudulentTransactionAttemptEvent(customerId, amount));
        }
        else
        {
            _cardGateway.ProcessPayment(creditCardDetails, amount);
            await _bus.Publish(new TransactionCompletedEvent(customerId, amount));
        }
    }
}

Let’s take a closer look.

var fraudCheckResponses = await _bus.MulticastRequest(new FraudCheckRequest(creditCardDetails, amount),
                                                      TimeSpan.FromSeconds(1));

This line of code is now fetching an IEnumerable from our fraud checking services rather than a single response. We're waiting for one second and then checking if there were any responses received. This means that we can now use a "black-ball" style pattern (also known as "speak now or forever hold your peace") and simply allow any objectors to object within a specified timeout. If nobody objects then the transaction is presumed non-fraudulent and we process it as per normal.

One optimisation we can now make is that we can choose to take:

  1. The first response.
  2. The first n responses.
  3. All the responses within the specified timeout.

In this case, a slightly tidied version could look like this:

var isFraudulent = (await _bus.MulticastRequest(new FraudCheckRequest(creditCardDetails, amount),
                                                TimeSpan.FromSeconds(1)))
                   .Any();

if (isFraudulent)
{
    await _bus.Publish(new FraudulentTransactionAttemptEvent(customerId, amount));
}
else
{
    _cardGateway.ProcessPayment(creditCardDetails, amount);
    await _bus.Publish(new TransactionCompletedEvent(customerId, amount));
}

Note the call to .Any(). Nimbus will opportunistically return responses off the wire as soon as they arrive, meaning that each call to the enumerator’s MoveNext() will only block until there’s another message waiting (or the timeout elapses). If we’re only interested in whether anyone replies, any reply is enough for us to drop through immediately. If nobody replies saying that the transaction is fraudulent then we simply drop through after one second and continue on our merry way.

We could also use some kind of .Where(response => response.LikelihoodOfFraud > 0.5M).Any() or even a quorum/voting system - it’s entirely up to you.
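
As a sketch of that last idea, a simple majority vote over whichever checkers reply within the timeout might look like this (LikelihoodOfFraud is, as above, just an illustrative property on the response):

var fraudCheckResponses = (await _bus.MulticastRequest(new FraudCheckRequest(creditCardDetails, amount),
                                                       TimeSpan.FromSeconds(1))).ToArray();

// Majority vote: if more than half of the checkers that replied think the
// transaction looks dodgy, treat it as fraudulent. ToArray() waits out the
// full timeout so that every vote is counted.
var votesForFraud = fraudCheckResponses.Count(r => r.LikelihoodOfFraud > 0.5M);
var isFraudulent = fraudCheckResponses.Any() && votesForFraud * 2 > fraudCheckResponses.Length;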


Eventing with Nimbus

27 February 2014 00:00 AEST
.NET C# Event-driven architecture Nimbus Open-source software (OSS)

In this article we’re going to have a look at some of the eventing patterns we have in Nimbus

In Command handling with Nimbus we saw how we deal with fire-and-forget commands. This time around we care about events. They’re still fire-and-forget, but the difference is that whereas commands are consumed by only one consumer, events are consumed by multiple consumers. They’re broadcast. Mostly.

To reuse our scenario from our previous example, let’s imagine that we have a subscription-based web site that sends inspirational text messages to people’s phones each morning.

Scenario #1: Monolithic web application (aka Another Big Ball of Mud™).

Big Ball of Mud

We have a web application that handles everything from sign-up (ignoring for now where and how our data are stored) through to billing and the actual sending of text messages. That’s not so great in general, but let’s have a look at a few simple rules:

  1. When a customer signs up they should be sent a welcome text message.
  2. When a customer signs up we should bill them for their first month’s subscription immediately.
  3. Every morning at 0700 local time each customer should be sent an inspirational text.

Business is great. (It really is amazing what people will pay for, isn’t it?) Actually… business is so great that we need to start scaling ourselves out. As we said before, let’s ignore the bit about where we store our data and assume that there’s just a repository somewhere that isn’t anywhere near struggling yet. Unfortunately, our web-server-that-does-all-the-things is starting to chug quite a bit and we’re getting a bit worried that we won’t see out the month before it falls over.

But hey, it’s only a web server, right? And we know about web farms, don’t we? Web servers are easy!

We provision another one…

Multiple servers means multiple text messages

… and things start to go just a little bit wrong.

Our sign-up still works fine - the customer will hit either one web server or the other - and our welcome message and initial invoice get generated perfectly happily, too. Unfortunately, every morning, our customer is now going to receive two messages: one from each web server. This is irritating for them and potentially quite expensive for us - we’ve just doubled our SMS delivery costs. If we were to add a third (or tenth) web server then we’d end up sending our customer three (or ten) texts per morning. This is going to get old really quickly.

Scenario #2: Distributed architecture: a first cut

The obvious mistake here is that our web servers are responsible for way more than they should be. Web servers should… well… serve web pages. Let’s re-work our architecture to something sensible.

Web servers backed by single application server

We’re getting there. This doesn’t look half-bad except that we’ve now simply moved our problem of scaling to one layer down. We can have as many web servers as we want, now, but as soon as we start scaling out our app servers we run into the same problem as in Scenario #1.

Scenario 3: Distributed event handlers

Our next step is to separate some responsibilities onto different servers. Let’s have a look at what that might look like:

Single distributed worker for each action

This looks pretty good. We’ve split the load away from our app server onto a couple of different servers that have their own responsibilities.

This is the first example that’s actually worth writing some sample code for. Our code in this scenario could look something like this in our sign-up logic:

public async Task SignUp(CustomerDetails newCustomer)
{
	// do sign-up stuff
	await _bus.Publish(new CustomerSignedUpEvent(newCustomer));
}

and with these two handlers for the CustomerSignedUpEvent:

namespace WhenACustomerSignsUp
{
	public class SendThemAWelcomeEmail: IHandleMulticastEvent<CustomerSignedUpEvent>
	{
		public async Task Handle(CustomerSignedUpEvent busEvent)
		{
			// send the customer an email
		}
	}

	public class GenerateAnInvoiceForThem: IHandleMulticastEvent<CustomerSignedUpEvent>
	{
		public async Task Handle(CustomerSignedUpEvent busEvent)
		{
			// generate an invoice for the customer
		}
	}
}

We’re actually in pretty good shape here. But business is, by now, booming, and we’re generating more invoices than our single invoicer can handle. So we scale it out…

Multiple distributed workers for some handlers

… and wow, but do the phones start ringing. Can you spot what we’ve done? Yep, that’s right - every instance of our invoicer is happily sending our customers an invoice. When we had one invoicer, each customer received one invoice and all was well. When we moved to two invoicers, our customers each received two invoices for the same service. If we were to scale to ten (or a thousand) invoicers then our customers would receive ten (or a thousand) invoices.

Our customers are not happy.

Scenario #4: Competing handlers

Here’s where we introduce Nimbus’ concept of a competing event handler. In this example:

public class GenerateAnInvoiceForThem: IHandleMulticastEvent<CustomerSignedUpEvent>
{
	public async Task Handle(CustomerSignedUpEvent busEvent)
	{
		// generate an invoice for the customer
	}
}

we implement the IHandleMulticastEvent<> interface. This means that every instance of our handler will receive a copy of the message. That’s great for updating read models, caches and so on, but not so great for taking further action based on events.

Thankfully, there’s a simple solution. In this case we want to use a competing event handler, like so:

public class GenerateAnInvoiceForThem: IHandleCompetingEvent<CustomerSignedUpEvent>
{
	public async Task Handle(CustomerSignedUpEvent busEvent)
	{
		// generate an invoice for the customer
	}
}

By telling Nimbus that we only want a single instance of each type of service to receive this event, we can ensure that our customers will only receive one invoice no matter how much we scale out.

A key concept to grasp here is that a single instance of each service type will receive the message. In other words:

  • Exactly one instance of our invoicer will see the event
  • Exactly one instance of our welcomer will see the event

Combining multicast and competing event handlers

It’s entirely possible that our invoicer will want to keep an up-to-date list of customers for all sorts of reasons. In this case, it’s likely that our invoicer will want to receive a copy of the CustomerSignedUpEvent even if it’s not the instance that’s going to generate an invoice this time around.

Our invoicer code might now look something like this:

namespace WhenACustomerSignsUp
{
	public class GenerateAnInvoiceForThem: IHandleCompetingEvent<CustomerSignedUpEvent>
	{
		public async Task Handle(CustomerSignedUpEvent busEvent)
		{
			// only ONE instance of me will have this handler called
		}
	}

	public class RecordTheCustomerInMyLocalDatabase: IHandleMulticastEvent<CustomerSignedUpEvent>
	{
		public async Task Handle(CustomerSignedUpEvent busEvent)
		{
			// EVERY instance of me will have this handler called.
		}
	}
}

So there we go. We now have a loosely-coupled system that we can scale out massively on demand, without worrying about concurrency issues.

This is awesome! But how do we send our inspirational messages every morning?

Sneak peek: have a look at the SendAt(…) method on IBus. We’ll cover that in another article shortly.
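
As a teaser, scheduling tomorrow morning’s run might look something like the snippet below. The parameter shape of SendAt is an assumption on my part - check the IBus interface for the exact signature - and I’m borrowing the SendSMSCommand idea from the command-handling post:

// Assumed shape: the command to send plus the time at which to deliver it.
DateTimeOffset tomorrowMorning = DateTime.Today.AddDays(1).AddHours(7);

await _bus.SendAt(new SendSMSCommand(subscriber.PhoneNumber, inspirationalThought), tomorrowMorning);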


Command handling with Nimbus

26 February 2014 00:00 AEST
.NET C# Event-driven architecture Nimbus Open-source software (OSS)

We’ve had a quick introduction to Nimbus, so let’s look at some messaging patterns in a bit more detail.

Picture this: you’re running a successful company that sends people a “Good morning!” text message every day. (It’s amazing what people will pay for, isn’t it?) People pay $5/month for your inspirational text message and business is great.

Let’s say you have some clever logic that decides what to send people in the morning. Let’s call that the Thinker. The Thinker is quite fast and it can churn out many inspirational thoughts per second. The Thinker code initially looks something like this:

Scenario 1: Big Ball of Mud

public class Thinker
{
    private readonly SMSGateway _smsGateway;

    public Thinker(SMSGateway smsGateway)
    {
        _smsGateway = smsGateway;
    }

    public void SendSomethingInspirational(Subscriber[] subscribers)
    {
        foreach (var subscriber in subscribers)
        {
            var inspirationalThought = ThinkOfSomethingNiceToSay();
            _smsGateway.SendSMS(subscriber.PhoneNumber, inspirationalThought);
        }
    }

    private string ThinkOfSomethingNiceToSay()
    {
        throw new NotImplementedException();
    }
}

which means our logical design looks like this:

Thinker coupled to SMS gateway

That’s a bit silly - we’ve coupled our Thinker to our SMS gateway, which means two things:

  1. The Thinker can only generate messages as fast as the SMS gateway can receive them; and
  2. If the SMS gateway falls down, the Thinker can’t work.

Let’s try decoupling them and see how we go.

Scenario 2: Decoupled Thinker from SMS gateway

In this scenario, our code looks like this:

public class Thinker
{
    private readonly IBus _bus;

    public Thinker(IBus bus)
    {
        _bus = bus;
    }

    public void SendSomethingInspirational(Subscriber[] subscribers)
    {
        foreach (var subscriber in subscribers)
        {
            var inspirationalThought = ThinkOfSomethingNiceToSay();
            _bus.Send(new SendSMSCommand(subscriber.PhoneNumber, inspirationalThought));
        }
    }

    private string ThinkOfSomethingNiceToSay()
    {
        throw new NotImplementedException();
    }
}

and we have a handler that looks something like this:

public class SendSMSCommandHandler: IHandleCommand<SendSMSCommand>
{
    private readonly SMSGateway _smsGateway;

    public SendSMSCommandHandler(SMSGateway smsGateway)
    {
        _smsGateway = smsGateway;
    }

    public async Task Handle(SendSMSCommand busCommand)
    {
        _smsGateway.SendSMS(busCommand.PhoneNumber, busCommand.Message);
    }
}

Our topology now looks something like this:

Decoupled Thinker from SMS sender

This is much better. In this scenario, our Thinker can generate inspirational thoughts as fast as it can think and simply queue them for delivery. If the SMS gateway is slow or goes down, the Thinker isn’t affected and the texts can be delivered later by the retry logic built into the bus itself.

What? Retry logic? Did we forget to mention that we get that for free? If your SendSMSCommandHandler class blows up when it’s trying to send a message, don’t worry about handling exceptions or failing gracefully. Just fail. Nimbus will catch any exception you throw and automatically put the message back onto the queue for another attempt. If the gateway has a long outage, there are compensatory actions we can take pretty cheaply, too. (Dead letter queues are a topic for another day, but they’re there.)

So… business is great, and we’ve hit the front page of Reddit. Everyone wants our inspirational thoughts. As far as our Thinker is concerned, that’s no problem - it can generate thousands of happy thoughts per second all morning. Our telco’s SMS delivery gateway looks like it’s getting a bit swamped, though. Even though we’ve decoupled our Thinker from our SMS sender, messages are still taking too long to arrive and the SMS gateway itself is just too slow.

Scenario 3: Scaling out our command handlers

This is where we discover that distributed systems are pure awesome.

When designed well, a good system will allow us to scale parts out as necessary. We’re going to scale out our SMS sending architecture and solve our throughput problem. All we need to do is:

  1. Spin up another SendSMSCommandHandler server; and
  2. Point it at a different telco’s SMS gateway.

Job done.

What - we didn’t have to reconfigure our Thinker to send to two gateways? And what about the first SMS gateway? Doesn’t it need to know about load balancing? Well… no.

This is what our system now looks like:

Scaled out SMS sender

Stuff we get for free out of this design includes:

  • Zero code changes to our Thinker
  • Zero code changes to our existing SMS sender
  • Automatic, in-built load-balancing between our two SMS senders

Implicit load-balancing is part and parcel of a queue-based system like Nimbus. Under the covers, there’s a message queue (we’ll talk about queues soon) for each type of command. Every application instance that can handle that type of command just pulls messages from the head of the command queue as fast as it can. This means that there’s no load-balancer to configure and there are no pools to balance - it all just works. If one handler is faster than another (say, for instance, you have different hardware between the two) then the load will be automatically distributed between the two just because each node will pull commands at a different rate.

How cool is that?

Stay tuned for more messaging patterns and how to use them with Nimbus.


Handler interface changes in Nimbus 1.1

25 February 2014 00:00 AEST
.NET C# Event-driven architecture Nimbus Open-source software (OSS)

We’ve tweaked the handler interfaces slightly for the 1.1 release of Nimbus.

In the 1.0 series, handlers were void methods. I admit it: this was a design flaw. We thought it would make for a more simple introduction to using the bus - and it did - but the trade-off was that it was much more complicated to do clever stuff.

Consider this handler method:

public void Handle(DoFooCommand busCommand)
{
    DoFoo();
}

Pretty straight-forward, right? Except what happens when doing foo requires us to publish an event afterwards?

public void Handle(DoFooCommand busCommand)
{
    DoFoo();
    _bus.Publish(new FooWasDoneEvent());
}

That doesn’t look so bad, except that we’ve missed the fact that _bus.Publish actually returns a Task and executes asynchronously. What if doing foo required us to ask a question first?

public void Handle(DoFooCommand busCommand)
{
    var result = await _bus.Request(new WhoLastDidFooRequest());
    DoFoo(result.WhoDunnit);
    _bus.Publish(new FooWasDoneEvent());
}

Now things are a bit more complicated. The above method won’t compile, as it’s not marked as async. But there’s a simple fix, right?

public async void Handle(DoFooCommand busCommand)
{
    var result = await _bus.Request(new WhoLastDidFooRequest());
    DoFoo(result.WhoDunnit);
    await _bus.Publish(new FooWasDoneEvent());
}

Problem solved. Except that it’s not. Although our code will compile and execute happily, under the covers the Nimbus command dispatcher has no easy way of waiting for your async handler method to complete. As far as the dispatcher is concerned, your handler executed successfully - and really quickly - so the message gets marked as successfully handled.

Think about what happens in this example case below (courtesy of the immortal Krzysztof Kozmic via this GitHub issue):

public async void Handle(DoFooCommand busCommand)
{
    throw new InvalidOperationException("HA HA HA, you can't catch me!");
}

As far as the dispatcher is concerned, your handler method executed just fine. And now we’ve broken our delivery guarantee. Not so good.
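
To see why, here’s an illustrative sketch of the difference from the dispatcher’s point of view. This isn’t Nimbus’s actual dispatch code; SomeVoidHandler, SomeTaskHandler and CompleteMessageAsync are just stand-ins for the real plumbing:

public async Task DispatchToVoidHandler(DoFooCommand command, SomeVoidHandler handler)
{
    handler.Handle(command);       // returns as soon as the handler hits its first await (or throws);
                                   // any exception escapes the dispatcher entirely
    await CompleteMessageAsync();  // the message gets marked as handled regardless
}

public async Task DispatchToTaskHandler(DoFooCommand command, SomeTaskHandler handler)
{
    await handler.Handle(command); // an exception faults the task and surfaces right here
    await CompleteMessageAsync();  // only reached if the handler genuinely succeeded
}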

The fix for this is simple:

public async Task Handle(DoFooCommand busCommand)
{
    throw new InvalidOperationException("HA HA HA, you can't catch me!");
}

Done. Your method now returns a Task - which the Nimbus dispatcher can await - and if your handler throws then we know about it and can handle it appropriately. So your actual handler would look like this:

public async Task Handle(DoFooCommand busCommand)
{
    var result = await _bus.Request(new WhoLastDidFooRequest());
    DoFoo(result.WhoDunnit);
    await _bus.Publish(new FooWasDoneEvent());
}

So why is this worth an article? Because in order to make this change, we’ve had to alter the IHandleCommand, IHandleRequest etc. interfaces to have the handler methods return tasks. This:

public interface IHandleCommand<TBusCommand> where TBusCommand : IBusCommand
{
    void Handle(TBusCommand busCommand);
}

is now this:

public interface IHandleCommand<TBusCommand> where TBusCommand : IBusCommand
{
    Task Handle(TBusCommand busCommand);
}

This means that when you upgrade to the 1.1 versions of Nimbus you’ll need to do a quick Ctrl-Shift-H for all your instances of “void Handle(” and replace them with “Task Handle(”.


Nimbus: What is it and why should I care?

23 February 2014 00:00 AEST
.NET C# Event-driven architecture Nimbus Distributed systems Open-source software (OSS)

At my current employer, we deal with a large number of problems whose solution is a distributed system of one kind or another. We’ve used several messaging frameworks in past projects - everything from raw MSMQ through to RabbitMQ, MassTransit, NServiceBus and all sorts of other odds and ends.

All of them had their weak points and we kept finding that we had to write custom code no matter which framework we chose.

So… Damian Maclennan and I built a thing. That thing is called Nimbus. We’re quite proud of it. And here’s why you want to use it.

What is Nimbus?

Nimbus is a nice, easy-to-use service bus framework built on top of the Azure Service Bus and Windows Service Bus stack.

It runs on both cloud-based service bus instances and on-premises installations of Windows Service Bus, and will happily support federation between the two.

Why do I want it?

It’s easy to get up and running

Getting an instance up and running is fast and easy. You’ll need an Azure account (free) and a service bus namespace if you don’t have a local Windows Service Bus installation, after which:

Install-Package Nimbus

followed by some simple configuration code (just copy/paste and change your application name and connection string):

var connectionString = ConfigurationManager.AppSettings["AzureConnectionString"];
var typeProvider = new AssemblyScanningTypeProvider(Assembly.GetExecutingAssembly());
var messageHandlerFactory = new DefaultMessageHandlerFactory(typeProvider);

var bus = new BusBuilder().Configure()
                            .WithNames("TODO Change this to your application's name", Environment.MachineName)
                            .WithConnectionString(connectionString)
                            .WithTypesFrom(typeProvider)
                            .WithDefaultHandlerFactory(messageHandlerFactory)
                            .WithDefaultTimeout(TimeSpan.FromSeconds(10))
                            .Build();
bus.Start();
return bus;

That’s it. You’re up and running.

It’s really easy to use

Want to send a command on the bus?

public async Task SendSomeCommand()
{
    await _bus.Send(new DoSomethingCommand());
}

Want to handle that command?

public class DoSomethingCommandHandler: IHandleCommand<DoSomethingCommand>
{
    public async Task Handle(DoSomethingCommand command)
    {
        //TODO: Do something useful here.
    }
}

It supports both simple and complicated messaging patterns

Nimbus supports simple commanding and publish/subscribe in a way that you’re probably familiar with if you’ve ever used NServiceBus or MassTransit.

It also supports an elegant, awaitable request/response, like so:

var response = await _bus.Request(new HowLongDoPizzasTakeRequest());

It also supports much more advanced patterns like publish and competing subscribe and multicast request/response. I’ll cover each of these in subsequent articles.

Did we mention that it’s free? And open-source? And awesome?

There’s no “one message per second” limit or anything else with Nimbus. It’s free. And open source. You can clone it for yourself if you want - and we’d love it if you did and had a play with it.

If you’d like a feature, ask and we’ll see what we can do. If you need a feature in a hurry, you can just code it up and send us a pull request.

Please… have a look and let us know what you think.


RSS as a primary source of truth

18 February 2014 00:00 AEST
Meta

I’m experimenting with making RSS my authoritative source for blog posts.

I was thinking the other day about how RSS still seems to be the poor second cousin of most blogging platforms. Everything (well, everything civilised) generates RSS feeds but it’s done as an after-thought, not as the primary experience.

As a software engineer, I tend to consume most content directly from my RSS reader. I want that to be the most polished experience. I also want to be able to set up a bunch of ifttt.com recipes for different feeds, including feeds of my own activities.

I’ve also re-jigged the BlogMonster library to generate RSS as its primary source of truth. It still works if you want to stick it on a web page, of course, but the underlying model is now a SyndicationFeed and your individual blog posts are SyndicationItem instances. You can, of course, simply bind those to a Razor view.
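
For the curious, the standard .NET syndication types make this shape easy to work with. This is just a sketch of the general idea, not BlogMonster’s actual API:

// using System.ServiceModel.Syndication; using System.Xml;

var items = new[]
{
    new SyndicationItem("Hello, world",
                        "First post",
                        new Uri("http://example.com/posts/hello-world"))
};

var feed = new SyndicationFeed("My blog",
                               "Posts about software",
                               new Uri("http://example.com"),
                               items);

// Serve it as RSS...
using (var writer = XmlWriter.Create("feed.rss"))
{
    new Rss20FeedFormatter(feed).WriteTo(writer);
}

// ... or hand feed.Items straight to a Razor view and bind each SyndicationItem to HTML.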


With thanks to the late Aaron Swartz, a source of truth in more ways than one.


Introducing ConfigInjector

4 October 2013 00:00 AEST
.NET C# Configuration management ConfigInjector Open-source software (OSS)

So I’ve been using this pattern for a while and promising to blog it for almost as long.

Code is on GitHub; package is on NuGet. Here you go :)

How application settings should look:

Here’s a class that needs some configuration settings:

public class EmailSender : IEmailSender
{
    private readonly SmtpHostConfigurationSetting _smtpHost;
    private readonly SmtpPortConfigurationSetting _smtpPort;

    public EmailSender(SmtpHostConfigurationSetting smtpHost,
                       SmtpPortConfigurationSetting smtpPort)
    {
        _smtpHost = smtpHost;
        _smtpPort = smtpPort;
    }

    public void Send(MailMessage message)
    {
        // NOTE the way we can use our strongly-typed settings directly as
        // a string and int respectively
        using (var client = new SmtpClient(_smtpHost, _smtpPort))
        {
            client.Send(message);
        }
    }
}

Here’s how we declare the settings:

// This will give us a strongly-typed string setting.
public class SmtpHostConfigurationSetting : ConfigurationSetting<string>
{
}

// This will give us a strongly-typed int setting.
public class SmtpPortConfigurationSetting : ConfigurationSetting<int>
{
    protected override IEnumerable<string> ValidationErrors(int value)
    {
        if (value <= 0) yield return "TCP port numbers must be positive.";
        if (value > 65535) yield return "TCP port numbers cannot be greater than 65535.";
    }
}

Here’s how we set them in our [web app].config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="SmtpHostConfigurationSetting" value="localhost" />
    <add key="SmtpPortConfigurationSetting" value="25" />
  </appSettings>
</configuration>

… and here’s how we provide mock values for them in our unit tests:

var smtpHost = new SmtpHostConfigurationSetting {Value = "smtp.example.com"};
var smtpPort = new SmtpPortConfigurationSetting {Value = 25};

var emailSender = new EmailSender(smtpHost, smtpPort);

emailSender.Send(someTestMessage);

Getting started

In the NuGet Package Manager Console, type:

Install-Package ConfigInjector

then run up the configurator like this:

ConfigurationConfigurator
    .RegisterConfigurationSettings()
    .FromAssemblies(/* TODO: Provide a list of assemblies to scan for configuration settings here  */)
    .RegisterWithContainer(configSetting => /* TODO: Register this instance with your container here */ )
    .DoYourThing();

You can pick your favourite container from the list below or roll your own.

Getting started with Autofac

var builder = new ContainerBuilder();

builder.RegisterType<DeepThought>();

ConfigurationConfigurator
    .RegisterConfigurationSettings()
    .FromAssemblies(typeof (DeepThought).Assembly)
    .RegisterWithContainer(configSetting => builder.RegisterInstance(configSetting)
                                                   .AsSelf()
                                                   .SingleInstance())
    .DoYourThing();

return builder.Build();

Getting started with Castle Windsor

var container = new WindsorContainer();

container.Register(Component.For<DeepThought>());

ConfigurationConfigurator
    .RegisterConfigurationSettings()
    .FromAssemblies(typeof (DeepThought).Assembly)
    .RegisterWithContainer(configSetting => container.Register(Component.For(configSetting.GetType())
                                                                        .Instance(configSetting)
                                                                        .LifestyleSingleton()))
    .DoYourThing();

return container;

Getting started with Ninject

var kernel = new StandardKernel();

kernel.Bind<DeepThought>().ToSelf();

ConfigurationConfigurator
    .RegisterConfigurationSettings()
    .FromAssemblies(typeof (DeepThought).Assembly)
    .RegisterWithContainer(configSetting => kernel.Bind(configSetting.GetType())
                                                  .ToConstant(configSetting))
    .DoYourThing();

return kernel;

Happiness is a legitimate goal

21 August 2013 00:00 AEST
Career

It’s okay to put “be happy” at the top of your to-do list. It’s okay to set that as a goal. And it’s okay to talk about how to achieve it.

This is a universal problem but I see it a lot more in adults than children, and usually more in males than females.

I teach, coach and mentor a lot of people and it astounds me that when I ask them if they’re happy with how things are going, almost invariably they say no. When I ask if they’d like to feel happy, they look at me as if I’m a Martian. It’s almost as if they think that they should just be happy, and that if they’re not then there’s something wrong with them rather than their situation.

It takes courage to accept that you’re not happy and to try to change it, because what if you fail? Then you’re both unhappy and a failure? Bollocks.

It’s okay to do things just because they make us happy. It’s okay to change things because the status quo is making us unhappy.

Being happy is a valid desire and a valid goal.


Quick demonstration of continuous delivery

24 July 2013 00:00 AEST
Continuous delivery

I’m running Readify’s Making Legacy Apps Awesome workshop right now.

Part of making legacy apps easy to maintain is getting a deployment pipeline functioning.

This post is a quick demo of how easy it should be to deploy code into production.

git push

Say hello to the nice people of the world, children :)


The story so far: a fairy tale...

11 April 2013 00:00 AEST
coding-for-fun-and-profit

I’ve just pushed the latest to the Making Legacy Apps Awesome workshop’s Git repo.

You’ll note that the app is actually quite small. You could say that that’s because Krzysztof and I felt dirty writing it (which we did) but in reality it’s small because we’re going to cover a large number of topics in two days and we don’t want people getting bogged down in doing the same repetitive refactorings time after time.

The code’s probably going to undergo a few tweaks before the workshop but it’s worth cloning now for links, download instructions and other odds and ends.

Oh, and there’s a story.

The story so far…

Once upon a time in land far, far away, there was a little town called Derp Town.

The citizens of Derp Town were a proud bunch, and one day they decided that they would create their own university for the betterment of all human-kind (and, of course, to show those snobby villagers over in Herp Town that they were not to be outdone).

The villagers were a poor but sincere lot and they were determined to build their university the Right Way™. They formed a Board of Directors (this was Modern Times, after all, and the old, fuddy-duddy Academic Council could leave their robes at home, thankyouverymuch) and resolved that their university would do the best of everything. It would be the most grand university in all the land. (Being more grand than the nearby Herp College, of course, was a thought that occurred to nobody at all and the citizens would not have dreamed of slighting their neighbours.)

The university had no technology budget or staff to speak of yet, but that was not to stop them from becoming the market leader in technological engagement with their students, who would travel over all the lands and across all the seas to study at such a prestigious institution and gaze with awe upon the wonder that was Derp University’s… Enrolment Portal.

Undeterred by their lack of suitably qualified engineers, and being a resourceful lot, they asked the teenage child of the Dean of the Rainbows and Unicorns faculty to write their grand portal using the latest and greatest technologies of the time. It would, they claimed, be a sight to behold.

The portal was unveiled, and all gasped with wonder, for it was great. The villagers rejoiced and praised the Board of Directors for their foresight and wisdom.

Of course, there were a few hitches along the way; a few things that went mysteriously wrong and a couple of enrolments that got eaten by the Terrible Greedy Fossifoo who snuck into the system one night, but by and large the villagers were pleased.

One fine day, the Dean’s teenage offspring decided to embark upon an adventure. The child packed some belongings, said some good-byes and set off to find the Ivory Tower of the Architect. The villagers rejoiced, for the Architect would surely praise their Enrolment Portal and speak of them in tones of wonder. (And not at all smug, of course, that Herp College had never seen or spoken to the Architect or been so praised.)

During the time in which the Dean’s teenage child and creator of the Enrolment Portal was adventuring, it came time for the villagers to extend the portal. While it had been quite good for the first semester of Derp University’s existence, a few (very minor, of course) shortcomings had come to light. Although the portal’s author had been the only one to know all the ins and outs of the system, the villagers were confident that these shortcomings could surely be quickly addressed by the villagers themselves if they merely put their minds to it.

Weeks came and went; mid-term examinations happened; students caroused; the leaves began to brown and the seasons to turn. The villagers were no closer to making the required changes to their vaunted portal, and time was running out.

The villagers knew fear.

The villagers worked, and patched, and cobbled, and hacked, and eventually they came to accept that their vaunted Enrolment Portal was unknowable by any but its author, and its author was nowhere to be found.

The villagers knew despair.

In their misery, the villagers came to accept that what they had created for their university had not been Done Right This Time™, but instead was a Brand New Legacy Application™.

A young traveller from far away chose this moment to enter the village, seeking food and shelter. The traveller carried in their luggage a USB stick, upon which the villagers discovered wondrous tomes of knowledge and tools of refactoring. In desperation, the villagers begged the traveller to renew their hopes and restore their grand Enrolment Portal to its former glory.

Despite being young and inexperienced, the traveller took pity upon the villagers and agreed to aid them.

To be continued…


Two-day workshop: Making Legacy Applications Awesome

12 March 2013 00:00 AEST
coding-for-fun-and-profit

I’m running a two-day workshop with Krzysztof Kozmic in Brisbane in April.

If you’ve ever had the privilege of maintaining a legacy application once it’s been in production for a while, you’ll likely appreciate some of the lessons on offer.

Once in a blue moon software engineers have the privilege of embarking on a new project: to do away with the old; to start over; to Do It Right This Time™. More often, software developers are saddled with existing legacy applications with poor code quality, no regression tests to speak of and frustrated, angry customers and stakeholders to boot. This downward spiral is all too common in the software industry and it would appear that there’s no way out – or is there?

What makes a legacy application? Every big ball of mud had its origins in a green-fields project. Where do we draw the distinction? And why does it matter? Isn’t every application a legacy after it’s released? How do we maintain our software so that “legacy” isn’t a bad word any more – and how can we improve our existing software to that standard?

This two-day workshop will start with the exploration of an utter disaster of a codebase. We’ll investigate how it got into that state in the first place, decide on an end goal, devise a rough strategy to get there and then fire up the compiler. We’ll finish the workshop with a well-factored, usable, maintainable application and a whole lot of appreciation for the tools available to us.

At each stage of the journey you’ll be given the opportunity to have a go at refactoring the application to the next point, after which you’ll be able to pull that completed exercise from GitHub. You will be writing code and you won’t be left behind.

You will need:

  • Laptop (WiFi key will be shared on the day)
  • Visual Studio 2012
  • SQL Server 2012 Express Edition
  • Git (download TortoiseGit if you’re unfamiliar with Git)

You will want:

  • ReSharper (trial versions are available from jetbrains.com)
  • Visual Studio 2012
  • A pair-programming partner. Partners will be arranged on the day if necessary but you’ll probably prefer to bring a colleague. If you want to go it alone, that’s fine, too.

In advance:

git clone git://github.com/Readify/MakingLegacyApplicationsAwesome.git

Further instructions for the workshop will be made available within this repository so make sure you do this before the day!

Tickets are available via the Readify event page.


We tried agile and it didn't work

10 March 2013 00:00 AEST
Agile

So, you tried agile and it didn’t work?

Let’s first look at this via an analogy: You fall into a lake. You try swimming but you’re not very good at it. Should you stop?

So why are agile methods supposed to work in the first place? Forget the hype about agile. Forget Scrum, Kanban, Lean and all those other buzzwords and, instead, consider this very simple question. Why does agile work? Or, at least, why should we try it again when we tried it once or twice and all we encountered was failure?

When you say “We tried agile and it didn’t work,” what you’re really saying is “We tried agile and we kept running into failure. Releasing so frequently was hard; our testing cycle was too long to fit into a sprint; our developers couldn’t keep up; we kept getting stuff wrong.”

In other words, your agile methodology was doing exactly what it was supposed to do: highlight failures early.

When I hear “We tried agile and it didn’t work,” I hear “We tried agile and it worked too well but we didn’t like the message so we stopped listening.”

I hate to break it to you, but highlighting failures is actually the entire reason for existence of an agile process. Everything else is window dressing.

  • Every feedback point is an opportunity to identify failings, both large and small.
  • Every missed user story is a message that the team can’t yet estimate well enough.
  • Every bug discovered by end users rather than automated tests tells the story of human error.
  • Every pain point is a warning to fix it before it gets worse.

Teams that “go agile” usually experience pain because previously they deferred all of it to the end of a multi-year, single-release project. It’s not that there’s less pain in single-release projects; it’s just that all the pain is felt at once. That kind of pain is often enough to cause individual nervous breakdowns and company bankruptcies.

When you feel the pain from going agile, don’t view it as failure. View it as the process’s helpfully surfacing problems early so that you can deal with them while there’s still time.


Inversion of control from first principles: Top Gear style

2 December 2012 00:00 AEST
Conference Community Inversion of control Test-driven development

DDD Brisbane 2012 yesterday was great fun. If you weren’t there, you really missed out.

Massive thanks to Damian Brady, Bronwen Zande, John O’Brien, Brendan and Lin Kowitz and David Cook for putting on a great event.

The video from my talk (abstract) is now online:

Some of my favourite tweets from the talk:


Vote for my DDD Brisbane talk: Inversion of Control from First Principles: Top Gear Style

3 November 2012 00:00 AEST
Conference Community Inversion of control Test-driven development

So I’m throwing my hat into the ring again to present at DDD Brisbane.

DDD Brisbane 2012 is on the 1st of December (a Saturday) and sessions are peer-voted so you get to see what you want to see.

Inversion of Control from First Principles: Top Gear Style

Tonight: James May writes “Hello, World!”, Richard Hammond cleans up the mess and Clarkson does some shouting.

When most people first try to apply good OO design the wheels fall off as soon as their app starts to get complex. TDD, mocks, IoC, WTF? What are these TLAs, why should you care and where’s that owner’s manual when you need it, anyway?

Most people are afraid of trying TDD and IoC because they don’t really know what they’re doing. In true Top Gear spirit we’re not going to let ignorance prevent us from having a go, so sit back and watch us point a compiler in the general direction of France and open the throttle.

In this talk we’re going to introduce inversion of control from first principles so that it’s not just an abstract concept but a real, “I finally get it” tool in your toolbox. We’ll start with “Hello, world!” and finish by writing a functioning IoC container - live, in real-time and without a seat-belt - and you can take the code home afterwards and test-drive it yourself.

In the right hands, IoC is a very sharp tool. Just don’t let Clarkson drop it on his foot…

*Actual Top Gear presenters may not be present. But it will be awesome anyway.

You should submit something, too.

Don’t forget to vote for me :)


In software, the iron triangle is a lie

31 August 2012 00:00 AEST
coding-for-fun-and-profit

Everyone’s heard the old adage, “Fast, good, cheap: pick two.” It’s called the Iron Triangle or Project Triangle.

Fast, good, cheap

I’m not going to make this argument about the world in general but in software this just doesn’t work.

Why? Because software quality is paramount and poor-quality software is a complete showstopper as far as “fast” is concerned. You can’t build any decent-sized piece of software on a poor foundation. If the code is good it will be easy and quick to change. If the code is poor it will be slow and painful to change.

Cheap will start out cheap and nasty by design but will morph into “expensive and nasty” very, very quickly. Then you’ll be stuck with your expensive-yet-cheap-and-nasty legacy application and a team of developers heading for the door before the midden hits the windmill.

In software your best options are “fast and good” (if you can find a crack team) or “slow and good” but neither of those is cheap.


What risks are you taking with your business?

21 August 2012 00:00 AEST
Governance Risk management Technical debt

I had a potential client contact us a while ago. We hadn’t dealt with them before and they didn’t end up retaining us - largely, I think, because the message about how much trouble they were in might have been a bit too unpalatable to heed.

They’re in a world of pain through a combination of bad luck and poor planning although, to be fair, it’s more of the latter.

I can’t help you with bad luck but I can prompt you to plan for it.

If you ship software, please ask yourself these questions:

  1. If you had to ship a build tomorrow, could you?
  2. How long would it take? Be honest - a day? A week? A month? A year? Longer?
  3. What dependencies do you have that could cause you to need to ship one?
    • Third-party web services?
    • iOS provisioning profiles?
    • Expired x.509 certificates?
    • Changes to certificate revocation lists?
    • A critical security flaw?
    • A leap-year bug?
    • A leap-second bug?
    • An operating system patch?
  4. What monitoring do you have in place so that you’re the first to know about any of these problems?
  5. How much will it hurt if any of these fails?
  6. How quickly do you need to be back up and running?
  7. How many people are going to sue you if your software/platform/application falls down? And for how much?
  8. How much do you stand to lose?

Back to that potential client: I honestly don’t think their business is going to survive this particular flavour of disaster. In other words, I think the entire company is going to fold - and all because someone else moved their cheese and they didn’t have a contingency plan. I wish them the best but I can’t help them now - not at this late stage.

I can’t help them, but I can remind you that the unexpected does happen - and will happen to you at some point. If your answers to any of the questions above frighten you… better me than fate :)

UPDATE: It brings me no happiness to report that they indeed did go bankrupt. Please don’t let that happen to you for such a preventable reason.


Introducing YACLP: Yet Another Command-Line Parser

28 June 2012 00:00 AEST
Open source C# .NET

It’s on NuGet:

Install-Package YACLP

Why another one?

Because there were a bunch out there but most of them focused more on the parsing than on being quick and easy to call.

I want my command-line parser not only to parse arguments (which it does, although it isn’t very flexible about it) but also to generate a usage message automatically so that I don’t have to write one.

Simple Usage

var options = DefaultParser.ParseOrExitWithUsageMessage<MyCommandLineParameters>(args);

Recommendations

I’d recommend using an IConfiguration or similar interface so that anything that depends on it doesn’t need to know about command-line parameters. (Or, preferably, multiple interfaces according to the Interface Segregation Principle with different interfaces for different parts of your configuration. But I digress.)

Our main program would look like this:

public class Program
{
    private static void Main(string[] args)
    {
        var configuration = DefaultParser.ParseOrExitWithUsageMessage<CommandLineParameters>(args);

        new Greeter(configuration).SayHello();
    }
}

… and our Greeter like this:

public class Greeter
{
    private readonly IConfiguration _configuration;

    public Greeter(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void SayHello()
    {
        var message = string.IsNullOrEmpty(_configuration.Surname)
                          ? string.Format("Hi, {0}!", _configuration.FirstName)
                          : string.Format("Hello, Mr/Ms {0}", _configuration.Surname);

        Console.WriteLine(message);
    }
}

Note that our Greeter takes a dependency on an IConfiguration, which looks like this:

public interface IConfiguration
{
    string FirstName { get; set; }
    string Surname { get; set; }
}

… and that IConfiguration interface is implemented by our CommandLineParameters class:

public class CommandLineParameters : IConfiguration
{
    [ParameterDescription("The first name of the person using the program.")]
    public string FirstName { get; set; }

    [ParameterIsOptional]
    [ParameterDefault("Smith")]
    [ParameterDescription("The last name of the person using the program.")]
    public string Surname { get; set; }
}

The key point here is that our Greeter knows absolutely nothing about command-line parameters as everything is segregated using the IConfiguration interface.


On the Principle of Least Privilege

7 June 2012 00:00 AEST
Security

The Principle of Least Privilege states that a user (or service) should be given the absolute bare minimum privileges required in order to fulfil its function.

On the surface, how could this possibly be bad? If I have everything I need in order to do my job then by definition I have everything I need. Likewise, if my app has all the privileges it needs in order to function correctly then, again, by definition it can function correctly. Right?

For the purpose of this post I’m going to focus on application security.

Where this all falls down is in defining “least privilege” in a sensible manner. How do we normally decide what privileges an application will require? When we decide on what the application will do, of course. And how do we decide what an application will do? We gather our requirements, of course. And when do we do this? We (of course, of course) gather all our requirements up-front, because that’s how we roll.

To rephrase that:

  1. We gather our requirements up-front.
  2. We know these requirements to be inaccurate, incomplete or just plain wrong.
  3. We set our security policies according to these requirements.
  4. We have our policies “signed off” by some governance group or other.
  5. We send our security requirements off to our sysadmins to implement in the form of AD security groups, firewall ACLs etc.

In other words:

We send our known-broken security requirements, based on our known-broken application requirements, off to be set in stone before we ever even ship our application.

If you’re going to set strict security policies for your app then your development team should be responsible - and held accountable - for setting sensible policies and updating them quickly according to changing requirements. If you’re going to wrap security policies in endless red tape then don’t be surprised when 1) people ask for more privileges than they need just to avoid administrative pain; and 2) your project ends up with a sub-optimal result because of a bunch of undocumented security work-arounds that decrease your overall security anyway.

You’re likely to be much better served by frequent automated and manual audits of both production and non-production environments to identify mismatches between organisational policies and configuration actuality.

TL;DR: Hire smart people. Trust them. Get out of their road. Check their homework. Hold them accountable.


If your DBA makes your schema changes, you're doing it wrong

6 June 2012 00:00 AEST
Software engineering

Does your DBA make schema changes for you? If so, why?

One of the fundamental principles of an agile team is that of cross-functionality. Everyone should have at least a passing familiarity with everyone else’s role, and there should be no one bottleneck within the team. There should also be minimal external dependencies that could prevent the team from delivering its stated sprint goal. If you have an external dependency then you’re betting your own team’s success on something that you don’t control. Tell me how that makes sense, someone?

If you have a crack DBA within your team then that’s one thing. I still don’t think it’s wise, but at least they’re operating within your team. Even so, they’re a bottleneck: if more than one person needs to make a schema change then all but the first person can hurry up and wait for the DBA to be available.

Is your DBA a developer? Does s/he have commit rights and responsibilities just like any other member of your team? Will s/he fix the build if it breaks? Does s/he decide on your class names? Or on your project/solution structure? Then why have them act as a gatekeeper for same? Your database is just a persistent record of your domain model, and should change accordingly. The schema should be updated by your own scripts, kept in your own source repository, and applied automatically by your application. It is part of your application.
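
If that sounds abstract, here’s a bare-bones sketch of an application applying its own numbered migration scripts at startup. It’s deliberately naïve - a real implementation (DbUp, EF migrations or similar) also records which scripts have already been applied - but the point is that the scripts live in your repository and run as part of your deployment rather than landing in a DBA’s inbox:

// using System.Data.SqlClient; using System.IO; using System.Linq;

public static class SchemaMigrator
{
    // Runs every .sql script in the scripts directory, in order.
    // Illustrative only: a real migrator tracks which scripts have already run.
    public static void MigrateToLatest(string connectionString, string scriptsDirectory)
    {
        var scripts = Directory.GetFiles(scriptsDirectory, "*.sql")
                               .OrderBy(path => path);

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            foreach (var script in scripts)
            {
                using (var command = connection.CreateCommand())
                {
                    command.CommandText = File.ReadAllText(script);
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}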

Have infrastructure people do infrastructure and software developers write software. Database servers are infrastructure. Databases themselves are software components, not infrastructure.

This might sound like I’m against DBAs in principle. Not entirely, but I am against the kind who feel the need to insert themselves into application design decisions after the fact. To be fair, I’m also against developers who treat databases as an abstraction that they don’t have to understand. My position is that both attitudes are irresponsible.

As a developer using a database you’re responsible for knowing your tools and using them well, and that includes SQL. Likewise, as a DBA responsible for any component of a software development project you’re responsible for knowing your tools, and that includes being able to code to the extent that you can write migrations if that’s what your team needs.


Applications need to own their own data

2 June 2012 00:00 AEST
Software engineering

The job of a DBA is a relatively thankless one. To make things easier for all parties, there needs to be a better understanding of where the responsibilities lie between DBAs and applications developers.

Applications should be perfectly capable of maintaining their own schemas and data. A database is just a big variable store, in the same way as are the stack and the heap. It’s a clever store, yes, but still a variable store. The structure of that variable store and the access to it should be governed by the application itself. The application should be able to migrate its own schema up and down in the case of a rollout or rollback, and should need no human intervention for any part of its release.

Making changes to an app’s database outside of the deployment pipeline (for instance, inserting/deleting rows, adding indices or uniqueness constraints etc.) puts the application into a state that’s inconsistent with what’s in development and with what will be deployed when it next goes to production. This isn’t going to help anybody and may end up crashing the app.

A good software engineer will keep mechanical sympathy in mind when doing database work, and will ask for help when out of his/her depth. A good software engineer will know about nth normal forms, indices and sharding, and will be as responsible with the database(s) owned by his/her app as he/she would be with the stack and heap.

A good DBA will ensure that each app sharing a database server behaves as a good citizen and doesn’t unnecessarily or unfairly utilise resources. A good DBA will be able to help identify and debug poorly-performing queries, and contribute to changing them via the normal build/deployment pipeline.

We can play nicely in the sandpit together. Let’s do that :)


The Book of Process

14 October 2011 00:00 AEST
Software engineering Management
  1. Once upon a time, a company’s youthful founder lucked upon a successful method of performing a task.
  2. The task was profitable, and therefore it was good.
  3. The founder wrote down that method and bestowed it unto his/her minions.
  4. S/he said unto them, “This is The Process, and it is good.”
  5. The minions performed The Process until the end of days.
  6. And they all lived happily ever after.

Not quite.

The adage, “If it ain’t broke, don’t fix it” has a corollary best expressed by Tess Ferrandez: If broken it is, fix it you should. Or at least, “If broken it is, don’t inflict it on everyone just because it’s all too hard to bother changing it.”

I’m referring to internal corporate processes that serve no purpose other than demonstrating to some ISO 9000 certification minion that a documented process exists.

It seems as if every organisation, once it reaches a certain size, goes into the “create process and perish” stage. If it’s a private enterprise it’ll die a long, slow, horrible death of three-thousand triplicate signatures, but if it’s a government enterprise then it’s never going to die and we’re all going to hate it.

Is hate too strong a word? I don’t think so. Show me a single person who’s dealt with a government department and left happy, and I’ll show you someone who’s on far too many psychedelics to be on the same planet as the rest of us. We hate government agencies because they’re slow, bloated and inefficient.

So, given the choice, why do organisations choose to have processes that make them slow, bloated and despised? Your guess is as good as mine, but I think it’s to do with some misguided idea that they should be able to have any human follow the process and get the same result. Guess what, ladies and gentlemen: if you have a muppet-followable process then you’ll end up hiring muppets to implement it - which is at best embarrassing while the process makes sense, but an utter disaster once the process becomes obsolete and everyone refuses to recognise it. Per the Agile Manifesto, we value individuals and interactions over processes and tools.

The Book of Process above should read something like this:

  1. Once upon a time, a company’s youthful founder lucked upon a successful method of performing a task.
  2. The task was profitable, and therefore it was good.
  3. The founder wrote down that method and bestowed it unto his/her minions.
  4. S/he said unto them, “This is The Process, and it is good.”
  5. The minions performed The Process until the end of days.
  6. The end of days arrived with a pitchfork-waving, torch-brandishing mob of angry citizens who burned the company’s offices down around the minions.
  7. The minions could not find their backsides with both hands, let alone the “In Case of Fire” process document, quickly enough to escape.
  8. And the citizens all rejoiced, and lived happily ever after.

Is process hurting your company? If so, it might be worth considering whether you actually need all your documented processes, or whether you can just set desired outcomes and performance metrics, and leave your smart people to figure things out for themselv…

… oh. I get it. Smart people. They’ve already left. Never mind.


Farewell, Steve

6 October 2011 00:00 AEST
Meta

There’s nothing I can say that hasn’t been said before by someone else, about someone else, for similar reasons. Nonetheless: today the world has lost a giant and we are all the poorer for it.

Steve Jobs changed the game so many times that people lost count. His visionary genius, his personal drive and his dedication to making absolute perfection commonplace have left an indelible legacy in which we all share.

His arrogance, his of-course-my-way-is-better approach and his unwillingness to compromise cultivated dislike amongst many, but then, nobody else gave the world the iPhone or the MacBook. Steve’s arrogance was frequently justified and, well, his way generally was better.

Steve’s example should prompt all of us - in all industries - to treat elegant, beautiful design as a first-class consideration when creating something. If you build something people love, they will love you for it.

Farewell, Steve.


The Forgotten Convention-Based Test Harness

29 September 2011 00:00 AEST
coding-for-fun-and-profit

I’m writing another MVC3 app. I’m in the same world of pain with respect to magic strings and anonymous classes. I don’t like it here.

This is a method signature that keeps tripping me up:

Screenshot

But surely there are other overloads than that? Well, yes, and this is the list of them:

Screenshot

One of the major benefits of a strongly-typed language is not having everything dynamically typed - and that’s effectively what’s happening here, with string as a substitute for the dynamic type.

When we’re in a world of magic-string pain, the first thing to do is generally start creating some conventions. The second thing, of course, is that we test those conventions using unit tests. Hold on a second, though: we’re using a strongly-typed language and yet we’re writing unit tests to test our conventions because we’re using strings and anonymous classes? Why don’t we have a tool that does this for us?

We want a convention-based test harness that:

  1. Runs on every build.
  2. Has a set of conventions that are clear and unambiguous.
  3. Doesn’t make us manually write tests.
  4. Will auto-generate test cases for every new piece of code we write.
  5. Is refactoring-friendly.
  6. Is fast.
  7. Will fail the build if something’s wrong.

I think we’ve forgotten something important. Can anyone point to a tool that’s all of the above, comes pre-installed with every version of Visual Studio and requires zero integration work, no NuGet packages and just works?

Anyone? Anyone? No? Here’s one: csc.exe. Yep, that’s right: use the compiler.

Call me old-school, but a compiler is all of the above. Consider this method:

Screenshot
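
For the sake of discussion, assume it’s something as trivial as this (illustrative only - the screenshot shows a similarly simple method):

public int Add(int a, int b)
{
    return a + b;
}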

Think about it: why don’t I have to test that a and b are integers? Sure, I should be testing for edge cases here, but I don’t have to type-check or null-check my inputs. Why not? Because the compiler is enforcing the convention that when I say “int”, I mean a 32-bit integer and it simply won’t let anyone pass in anything else.

I don’t have to write a single unit test to enforce these conventions. The compiler will provide me with fast and reliable feedback - at compile time - if I’ve broken anything, which is far better than getting the feedback at run-time using unit tests (or worse, at run-time when a user hits an issue).

I think we as developers can afford to take a bit more time to write strongly-typed code, e.g. HtmlHelpers for controller actions. Try this one:

Screenshot

You can make your code infer the controller name, the action name, the area and all sorts of things without ever having a magic string. You could even add strongly-typed parameters to it (built using a fluent interface or via tiny types) so that it’s effectively impossible to get it wrong without the compiler complaining.
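
As a sketch of the idea - this isn’t the helper from the screenshot, and the names and conventions here are purely illustrative - an expression-based helper lets the compiler check the controller and action for you:

// using System; using System.Linq.Expressions; using System.Web.Mvc;

public static class StronglyTypedHtml
{
    // Builds an <a> tag whose controller and action names are inferred from the
    // expression, so there are no magic strings for the compiler to miss.
    public static MvcHtmlString ActionLinkFor<TController>(this HtmlHelper html,
                                                           Expression<Action<TController>> action,
                                                           string linkText)
        where TController : Controller
    {
        var methodCall = (MethodCallExpression) action.Body;

        var controllerName = typeof (TController).Name.Replace("Controller", string.Empty);
        var actionName = methodCall.Method.Name;

        var url = new UrlHelper(html.ViewContext.RequestContext).Action(actionName, controllerName);

        var tag = new TagBuilder("a");
        tag.MergeAttribute("href", url);
        tag.SetInnerText(linkText);

        return MvcHtmlString.Create(tag.ToString(TagRenderMode.Normal));
    }
}

// In a view: @Html.ActionLinkFor<HomeController>(c => c.Index(), "Home")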

So why don’t more people use such a great convention-based test tool? I have no idea.

Let’s make old-school cool again.


iPhone/MonoTouch unit testing with Team Foundation Server

26 September 2011 00:00 AEST
.NET C# MonoTouch Team Foundation Server Continuous delivery

I was recently involved in building another iPhone application for an enterprise customer. We had previously dealt with that customer and had a good working relationship and level of trust with them, but a huge part of that trust was the visibility that we provided as to what was going on. It’s mostly off-topic for this post, but the way we did it involved using Team Foundation Server, giving the client’s people accounts and making sure that the client’s product owner could log in at any time and see how the project was going.

With the iPhone application, we wanted to do exactly the same. The problem was that we hadn’t had that much experience using TFS to build iPhone apps. Most of our collective efforts had either been in Objective C using git (the stock-standard approach that Xcode pushes) or C#/MonoTouch using Mercurial (my experience). While both of those approaches are fine for personal and small projects, we really wanted all the other bonus points that TFS provides (continuous integration, work item tracking, reporting, web-based project portal etc.)

So how’d we do it? Well, the first thing to note is that we’re not actually building the application bundle using TFS - yet. That still requires a MacBook, MonoDevelop and a bunch of other stuff. We’ll probably get there soon using custom build tasks, rsync, ssh and a few other things, but we’re not quite there yet.

What we do have is a working continuous integration and nightly build, plus running tests using MSTest. The nice thing is that it actually wasn’t that hard.

  1. Open the project in Visual Studio (not MonoDevelop). You’ll probably need something like Chris Small’s MonoTouch Project Converter to make this work happily.
  2. Include monotouch.dll in your /lib directory and reference it from there rather than from the GAC. (It won’t be in the GAC on your build server, and nor should it be.)
  3. If you have other dependencies (e.g. System.Data), copy those from your MacBook into /lib as well and reference those from there.
  4. Done :)

The key point to note when you’re building your app is that you’re not going to be able to easily test your ViewController classes using MSTest, so make them dumb. If there’s business logic in there, extract it out into your domain model. If there’s data access logic in there… well… you’re doing it wrong anyway and you should definitely extract that out :)
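
As a hypothetical illustration (the names here are made up for this example), “dumb” looks something like this:

// using MonoTouch.UIKit;

// All the interesting logic lives in a plain class that MSTest can exercise on the build server.
public class DiscountCalculator
{
    public decimal ApplyDiscount(decimal total, int itemCount)
    {
        return itemCount >= 10 ? total * 0.9m : total;
    }
}

// The ViewController just shuffles values between the UI and the domain logic.
public class CheckoutViewController : UIViewController
{
    private readonly DiscountCalculator _calculator = new DiscountCalculator();

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        var total = _calculator.ApplyDiscount(100m, 12);
        Title = string.Format("Total: {0:C}", total);
    }
}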

You’ll end up with an app that has a dumb(ish) UI shell wrapped around a bunch of well-tested business logic classes. The added bonus of doing it this way is that you can then re-use a lot of that code when you write your WP7 or Android version.

The outcome? The visibility we wanted from the reporting and work item tracking side of things, plus a CI build that didn’t require witchcraft to configure, plus automated unit tests.


An iPhone Eye for the C# Guy at DDD Brisbane

27 August 2011 00:00 AEST
.NET C# MonoTouch

I just submitted this abstract for DDD Brisbane 2011. Don’t forget to vote for me!

An iPhone Eye for the C# Guy

iPhone Development using MonoTouch

This session will cover the basics of developing an iPhone application using C#/MonoTouch, from how to create a “Hello, world!” app through to a look at a real-world, production codebase.

We’ll cover the use of web services, threads, databases, generics (yes, you can use generics), reflection, inversion of control (yes, you can use IoC, too!) and general application architecture, and finish with a look at some tools, tips and tricks to make life as an iPhone developer much less painful.

This session will assume prior knowledge of threading, reflection, generics, inversion of control and why you’d want to use all of these, but don’t let that scare you :)


Cargo Cult Software

26 April 2011 00:00 AEST
coding-for-fun-and-profit

Ever heard of a cargo cult? It’s a term describing the philosophy of many pre-industrial tribes in the Pacific during World War II with respect to the “cargo”; i.e. the foodstuffs and equipment called in by American and Japanese radio operators. The thinking went that the Americans and Japanese had appropriated (stolen) the cargo that rightfully belonged to the natives by means of liaising with the gods.

The interesting thing isn’t the belief that the cargo was stolen but more that the means of stealing it back were novel. The natives began to imitate the radio operators in the hope that building the same items of worship (realistic-looking radio sets, landing strips and in some cases even mock aircraft), and memorising the noises that the operators made and faithfully reproducing them, would help them divert the cargo back to its rightful recipients.

Go on. Laugh. Laugh rudely and insensitively at the primitive natives. How could they be so naïve?

So what on earth does this have to do with software? Well, take a look around at most software projects you’ve been involved in. Why do we have multiple web service layers? Why do we separate concerns? Why do we abstract our persistence? Why even is a microservice?

There are legitimate (and good) answers to these questions, but the most common one is “because everyone else does it.” In other words, other people do these things and receive good software (“cargo”) in return, so if we do it then perhaps we’ll receive the cargo instead. The key is that some people understand why they do it, and some people just mimic it.

This post was prompted by a recent experience in which I ended up tearing apart an entire application (data access layer, service layer to access data, business logic layer, view/presenter layer etc.) and rebuilding it with something sensible - and all because someone tried to do it right and sadly had no idea how to do it.

I’ve seen so many software projects started with the absolute best of intentions. In most cases I honestly can’t fault the effort or the diligence displayed by the original developers - after all, almost nobody deliberately sets out to do a poor job. It’s heartbreaking, for example, to see someone having spent weeks or even months of their life inserting a web service layer for data access without understanding why they’re doing it, then complaining that their app’s too slow because every action ends up requiring a full table to be fetched and serialized over a SOAP call. Equally, I’ve seen people follow the Single Responsibility Principle to a ridiculous extent but end up with utterly unmaintainable code because they didn’t properly understand why it’s important. MVC, MVVM, n-tier… they’re all useful tools in the box, but all too often they’re just fake radio sets made of wood or landing lights that don’t emit light.

Having earlier laughed rudely at the primitive natives, let’s now cry. Cry, because you know that, at least once, each of us has been guilty of the same lack of understanding. Each of us has done something “because it’s best practice” and each of us has subsequently realised our own error.

Finally, to stand on my fake soapbox for a second: go and teach your craft, so that other people don’t have to make the errors in the first place, and so that you don’t have to clean up messes like these and destroy someone’s prized wooden radio set while you’re at it.

P.S. Credit to my friend and colleague, Paul Stovell, whose tweet finally prompted me to finish and publish this blog post.


Why merely "very good" employees don’t get promoted

21 April 2011 00:00 AEST
Software engineering Career advice

I saw a question on Slashdot this morning: Promotion Or Job Change: Which Is the Best Way To Advance In IT?. I decided to blog it rather than comment as it’s another one of those “I hear this question all the time” posts.

The question’s usually along the lines of, “I’m really good at my job. How come I can’t get promoted?”

The simple-yet-offensive answer is this: you haven’t done a good enough job to be promoted.

Before everyone starts screaming, let’s look at this from the perspective of a mythical manager who actually wants his/her employees to succeed. I know, I know, but they really are out there. So… what goes on?

Usually the internal monologue goes something like this:

  1. John’s doing a really good job running X/Y/Z.
  2. I like John and want to promote him.
  3. There’s a position running a new project and I’d really like to offer it to him.
  4. I have nobody to replace him with and would need to train a replacement.

Oops. John’s just made it impossible for even the best-intentioned manager to promote him. Now imagine what a less-well-intentioned one would do.

At any one of these stages a promotion is easily killable. If John isn’t doing a really good job running X/Y/Z then there’s no way that he’s going to be promoted except under the Dilbert Principle. Similarly, if John isn’t well-liked then there’s no way he’s going to get promoted simply because dealing with unpleasant people is unpleasant and a good manager isn’t going to inflict an unpleasant person on other people if he/she can at all avoid it. The third one’s obvious: if there’s no promotion available then there’s no promotion available.

The fourth point, though, is the one that almost invariably gets forgotten. I’ve blogged about this before but the gist of it is that if John’s made himself indispensable then that’s the end of it. Indispensable means that he cannot be done without; in other words, he’s locked himself into his current position all by himself. John hasn’t done a good enough job to get promoted because he hasn’t trained his own replacement.

Training your replacement is part of your job.

So why don’t people do it?

The most obvious answer is fear. Fear of being replaced by someone younger and cheaper; fear of being shown up by someone who ends up knowing more than you; fear that your management won’t see this as a valuable use of your time. There’s also fear of the unknown - it’s much easier in many cases to perennially gripe about being really good but not getting promoted than it is to actually get promoted and run the risk of failing in your new position.

There are other likely culprits (budgetary constraints, time pressure etc.) but the point of this article is that they’re all surmountable once the fear is overcome.

I want a promotion. Should I stay in my current company or look elsewhere?

The short answer to this is: go wherever your interests take you.

The longer answer: if you like your company and want a particular position, ask for it. It’s much easier to just tell people what you want and ask what you need to do in order to get it. You never know - they might not have realised that you’re bored or unhappy where you are, especially if you’re keeping things running smoothly and appear to have it all under control.

If your current company can’t or won’t accommodate you then, by all means, look elsewhere. Before doing that, though, take a good, hard, honest look at yourself and ask whether you’d promote you if you ran the company. If the answer is yes but the company won’t then it’s time to move on. If the answer is no then it’s probably time to ask for help - and also to start making some serious efforts towards your own professional development.

Finally…

Finally, if your company’s regularly appearing on FC or NGE then it’s time to jump ship no matter how good you are :)


Fix what you know is broken

19 April 2011 00:00 AEST
Software engineering

As a consultant, there’s a very common complaint that I hear from clients. The complaint is along the lines of, “It’s all such a mess,” or “We need to re-write it from scratch.” They’re almost always right in the first case, and almost invariably mistaken in the second. A messy codebase is a pain, but learning the wrong lesson from it just means that they’re going to experience the same pain all over again once they’ve done their re-write - if they’re still in business when they finish.

The first question to ask is simple: why is it all such a mess? If it’s a mess because you made a completely wrong technology choice (e.g. classic ASP for a point-of-sale application, or a thick client where a web client was required) or the team that wrote it simply didn’t have a clue and have all been fired - sorry, “found opportunities elsewhere” - then perhaps a re-write is in order. Other than that, there’s almost no good reason to do a complete re-write. Regardless, that’s not the point of this post.

The point of this post is that it’s usually such a mess because people don’t know how to fix it - or, more probably, people don’t know how to even decide on a strategy to follow and are drowning in technical debt as a result. Here’s a simple one:

Fix what you know is broken.

If you honestly have no idea where to start, ask for help. Plenty of people will. Firstly, though, try this:

  • Do you have source control? No? Then download Git and fix that.
  • Do you have continuous integration? No? Then download TeamCity and fix that.
  • Do you have unit tests? No? Then go and write at least a “Hello, world!” test to get yourself started (there’s a minimal starter test just below this list).
  • Do you have an issue-tracking system? No? Then try YouTrack, JIRA or similar - there are countless options. The main thing is to start writing problems down. Use sticky notes if you have to.
  • Do you have an automated deployment solution? No? Octopus is your friend.
  • Do you have log aggregation? If not, Seq will make a world of difference to you.
  • Are you afraid to change the code? Well… work out at least one reason why, and fix it.

There’s really no excuse for not having these sorts of tools. Moreover, there’s no excuse for not having the agility that these sorts of tools offer.
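
If “write a test” sounds like a big first step, it needn’t be. Here’s a minimal starter test, sketched with NUnit purely for illustration - use whichever test framework you already have or can adopt most easily:

using NUnit.Framework;

[TestFixture]
public class HelloWorldTests
{
    [Test]
    public void HelloWorld()
    {
        // This test exists purely to prove that your test runner works
        // end-to-end, locally and on your build server. Real tests come next.
        Assert.That(1 + 1, Is.EqualTo(2));
    }
}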

Once you have a build, a rudimentary test suite and a deployment solution, the next step is clear:

Fix what you know is broken.

What’s at the top of your issue-tracking list? Does it make sense? If so, then that’s what’s broken. Go and fix it. If not, then its priority is what’s broken. Fix it by re-prioritising it so that it does make sense.

I visited one client recently that had a test automation task as a “drop everything and fix now” priority - but below that were cases that were costing parts of their business money every single day. In this case, the prioritisation was broken. So… fix it and move on.

Once you’ve fixed something that was broken, release it. That’s right: release the thing. “Oh,” you might say, “but it has to go through n levels of QA, UAT and sign-off first.” Guess what: that’s the next thing that’s broken. So… given that you know what’s broken, what now?

Fix what you know is broken.

I’m starting to sound like a broken record here, but I’m also starting to sound like a broken record whenever I have to deliver this lesson in person :)

You need to get your release cycles down to something manageable, and if you’ve had a messy codebase for a while then I guarantee that you’re afraid of releasing to production because of what might have changed while you weren’t looking.

The solution is to start releasing earlier and more often. Get used to the idea that a production release is boring and routine, not unfamiliar and scary. Releasing to production should be scripted and ideally entirely automated (but that’s the subject of a squillion other blog posts) so I’m not going to re-hash it here. Just accept that if you’re afraid of releasing to production then that’s the next thing that’s broken. After all, if you haven’t changed the code since your last release then what’s likely to go wrong? If you have changed the code, then what you’re really afraid of is your testing regime, not deployment per se.

Once you have your releases automated and none of the above things are scary any more, you’re down to the boring, menial task of just chipping away at your technical debt. Identify the highest-priority item to fix; fix it; release it.

Can’t find the actual bug but have found another one nearby? Fix that bug. Then ship it. To production. And watch the logs. You’ll be surprised by how many times you discover that they’re related - or, at least, that fixing one bug makes the other one easier to find.

If you’re struggling to pick the highest-priority item to fix, here’s a tip: if it’s hard to choose, the candidates are probably close enough to equal that it doesn’t matter which one you pick. It’s more important to pick one and start work.

It really is that simple, ladies and gentlemen :)


If you’re so smart, why does all your code look simple?

4 July 2010 00:00 AEST
Software engineering

“Oh, so that’s all. That’s really easy.”

“Really? That’s all it does?”

“So where’s the hard part? I understood that straight away when I looked at it.”

Sometimes these questions are even asked of me, which is flattering ;) The flippant answer, of course, is “So that any idiot can come along and understand it.” Disappointing though it might be, this answer is also quite correct.[1]

If you only ever take one piece of advice from this blog, take this one: Code yourself out of your job.

Coding yourself out of a job doesn’t mean you’re going to get fired when you’re done.[2] Coding yourself out of a job means that you don’t end up responsible for the same chunk of a project forever, always fixing bugs in it because it’s too complicated, too involved or just plain scary for newcomers.

There’s an art to having someone look at your code and understand it at a glance. It’s a subtle display of skill, the result of which is that the person reading your code doesn’t even realise that you got inside their head before they ever arrived, understood what they’d be thinking when they looked at it and gave them subtle cues as to which parts of the code were the ones they were looking for.

There’s also a corresponding amount of confidence required in order to be able to choose a less optimal solution in terms of performance for the sake of comprehension. Why confidence? Because as part of having your code look simple, you have to be willing to have someone stroll along after the fact and ask within two seconds of looking at your solution, “Why did you choose this way? This other way’s almost twice as fast…” That, my friends, is the whole point: they understood it within two seconds of looking at it.

It only takes one “I don’t understand” question about a piece of code and, unless it’s a really, really fast and effective algorithm and the alternatives are awful, it has already cost more in real terms than its fast performance has saved.

Obviously in heavily-hit code paths this would not be a good choice, but if I have a method that only gets executed a few times per second then I honestly don’t care whether it takes 0.01s or 0.001s to execute. If there’s no other real difference between the two then of course I’m going to pick the faster one, but if the screamingly-fast one is horrendously complicated and I’m going to be the one who’ll have to come back to it every time it needs modifying, then I’ll take the merely-quite-fast one, thank you very much.

At the point where someone asks why you made a sub-optimal performance decision, you have to be sufficiently confident in your own ability to explain to them that yes, you could have done it another way, but then it’d take longer for every single person who came after you to understand what was going on, you’d have to field more questions about it, and that it actually didn’t matter[3]. You also have to be sufficiently confident in yourself to evaluate their solution and confess on occasion, “Actually, I really do like your solution and it’s better than mine. Let’s do it your way instead.”

In a nutshell, the optimal solution is not necessarily the fastest, but the most effective in terms of the long-term maintenance of the application.

Of course, the really smart ones amongst us can write code that’s simple, obvious and extremely fast. And we’re all that good, right? ;)

[1] Especially when that idiot might be yourself six months later…

[2] Unless you work for an unwise company, in which case you might consider leaving very soon anyway.

[3] Be correct about this, though. If it matters, do it better.


Generic solution for testing flag enums in C#

27 February 2010 00:00 AEST
.NET C#

This is one that’s irritated me just a little for ages, but never as much as this morning, when I needed to create a whole swag of small-ish flags enums and then test for bits set in them.

Here’s a quick solution:

public static class FlagExtensions
{
    public static bool HasFlag<T>(this T target, T flag) where T: struct
    {
        if (!typeof(T).IsEnum) throw new InvalidOperationException("Only supported for enum types.");

        int targetInt = Convert.ToInt32(target);
        int flagInt = Convert.ToInt32(flag);

        return (targetInt & flagInt) == flagInt;
    }
}

Notably, while we can’t use a where T: Enum constraint in our extension method, an enum (lower-case e) is actually a struct, so we can at least constrain it to structs and then do a quick type-check.

To use it to test whether an enum value has a particular flag set, try this:

MyFlagEnum flags = MyFlagEnum.Default;

if (flags.HasFlag<MyFlagEnum>(MyFlagEnum.Coffee))
{
    // our "Coffee" bit is set. Yum!
}
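
The example above doesn’t define MyFlagEnum; a definition to go with it might look like this (names and values are purely illustrative - the important parts are the [Flags] attribute and the power-of-two values):

[Flags]
public enum MyFlagEnum
{
    None    = 0,
    Coffee  = 1,
    Tea     = 2,
    Biscuit = 4,

    // A named combination of flags, just for illustration.
    Default = Coffee | Biscuit
}
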

On corporate blogging: corporate culture and personal ethics

13 October 2009 00:00 AEST
Software engineering Meta

Let me start by demonstrating just a tiny amount of corporate cynicism.

“Blogging is so Web two-point-oh.”

“Oh, but everyone has a blog.”

“If you’re not on, like, Twitter an’ Facebook ‘n’ stuff, then you’re like, a dinosaur or IBM or something.”

Let’s now all pause for a moment to welcome those companies that aren’t, strictly speaking, technological leaders any more, to the blogosphere. Congratulations, you’ve finally arrived. But what are we doing, here? What, when you get down to it, is corporate blogging all about? What should it be about? Why should a company do it? And when should it not?

A company should blog, first and foremost, when it actually has something to contribute to the world at large. Rambling personal blogs are fine for… well… personal purposes, but corporate blogs should be useful to current and potential customers, partners, shareholders and so on. They should also allow people within the company who normally wouldn’t be at the front line of public relations to provide their perspective on things - otherwise, why have a corporate blog at all? Why not just have a series of press releases from your marketing department?

Having people within your company contribute to your corporate blog can be a two-edged sword. Unfortunately, it’s one upon which many companies cut themselves. The pitfall should be obvious, but one damaging strategy that many companies employ is to sanitise or censor what their people say. I’m not arguing that you should permit people to blather your corporate secrets all over the web; merely that you should hire good people and then trust them to do their job well.

If you have a trusted, valued employee who’s worked on your business-critical systems or in client-facing roles where they’ve already represented the company, you should trust in their professionalism and let them post what they deem appropriate.

If you have an irresponsible blogger who’s publishing commercial-in-confidence information to the web at large, then you don’t just have an irresponsible blogger. You have an irresponsible employee, who should be swiftly counselled or transformed into a former employee.

To companies

When asking your employees to blog on your behalf, you need to give them some guidelines about what they should and should not say, and you should also give them some benefits from doing so. I suggest these as a minimum:

  1. We won’t edit your posts. Period. We reserve the right to pull them from the site if absolutely necessary, after which we’ll explain to you why we did so, but we will never edit anything that’s been published under your name. Period.
  2. We won’t require a pre-approval process. Post as you will. The corollary to this: be responsible with what you publish, because you will be held responsible for what you publish. We expect - and trust - you to do the right thing.
  3. It’s infeasible to track whether it was personal or company time on which you wrote a blog post, so it must be accepted that if you publish a post on our corporate blog then we own the content.
  4. We grant you a perpetual and irrevocable (but non-exclusive) licence to use the content that you generate for our blog for your own purposes, including but not limited to syndication to your own personal blog, re-publishing as journal articles, or any other purpose you wish - specifically including for personal profit - provided that such use does not harm the company’s reputation.

If you don’t offer at least these guarantees, then your blogging-inclined employees will simply go off and publish their own content anyway, and you’ll derive no benefit from their efforts. Moreover, you’ll have demonstrated a lack of trust in your employees that may well cause them to take their talents, not just their blogs, elsewhere.

If you can’t offer at least these guarantees, then I respectfully suggest that a corporate blog is not for you.

To individuals

Carefully consider your personal reputation. If you haven’t one, consider what you’d like it to be in a year or so’s time. If you have one, consider whether the opportunity to blog for your company is more likely to enhance or damage it.

Consider whether you’re likely to be permitted to express your real opinion on a matter, or whether you’re going to be asked to be a corporate mouthpiece. If the former, great. Thank your employer and do the right thing by them. If the latter, I suggest that you respectfully and courteously decline the invitation to contribute.

Never lie. You needn’t (and shouldn’t) air your corporate dirty linen in public - everyone has it, and nobody appreciates seeing someone else’s - but your personal integrity is yours to defend. If you disagree with a tactical or strategic decision made by your company, simply don’t write about it. If you think Joe from Accounts is a cretin, keep it to yourself. Write about your area of expertise or influence; ask for feedback from your audience; allow your readers to understand that your company is thinking about their issues and exploring ways and means of helping its stakeholders.

Remember, if you blog as a corporate automaton, your readers will see straight through it, and these are the people who won’t be hiring you at your next job interview - or whom you’ll be trying to hire yourself. Presumably your peers are the ones you want to respect and admire you.

In general

I was recently asked to contribute to a corporate blog. When I responded asking for the guarantees suggested above, I was surprised to learn that nobody had actually considered it yet. On reflection, that shouldn’t have been much of a surprise - blogging is, after all, a relatively new practice in corporate land.

The lessons to be learned from that, I guess, are to not be afraid to ask, and not be afraid to decline the opportunity if it isn’t appropriate for you.

Will you see my by-line on a corporate blog any time soon? We’ll have to wait and see :)


Worth Reading: Automate, else Enforce otherwise Path of Least Resistance

7 August 2009 00:00 AEST
Software engineering

A friend and colleague, Matthew Rowan, just published this article: Automate, else Enforce otherwise Path of Least Resistance.

This is well worth a read as far too many companies get burdened with process documentation over actual workable process.


Automatically rejecting appointments in Microsoft Outlook 2007

1 August 2009 00:00 AEST
coding-for-fun-and-profit

There’s a nice feature in Outlook that allows users to automatically accept appointments, and even decline conflicting appointments. Unfortunately, what it can’t do is let you give specific reasons for declining meeting invitations.

Screenshot

A particular pet hate of mine is when people send a meeting invitation entitled “Foo Discussions” or some such, and fail to specify a location or any content. It’s even more irritating when I’m trying to be a good little corporate citizen and have my calendar auto-accept appointments, but they send it ten minutes before the thing actually starts. They’re going to receive an acceptance notice (of course) but my phone’s not going to synch for a good half-hour, and there’s just no way I’m going to be there. Funnily enough, I’m not just sitting around on my backside, waiting for someone to invite me to a meeting.

Oh, a meeting! How exciting! I’ve been waiting for one of these all day!

Of course, if you simply decline offending appointments manually, people tend to get offended. (Which may or may not be a good thing, depending on who it is.) A better way, however, is to automate the process.

Nothing personal, old chap - my calendar just has automation rules that apply to everyone.

The rules for getting into my calendar are simple:

  1. Tell me everything I need to know about the meeting. This includes, specifically, its location. Outlook enforces pretty much everything else, but fails to enforce this one.

  2. Please do me the courtesy of checking my free/busy information and do not attempt to trump something that’s already been organised. It shows a complete and utter disregard for my time and that of anyone with whom I’ve already agreed to meet.

  3. Do me the courtesy of giving me at least 24 hours’ notice. Don’t send me a meeting request at 7pm on Monday evening for 7:30am on Tuesday morning. I’m not going to read it, and I’m not going to be there.

I finally snapped today, after another imbecilic meeting request, and wrote these two quick methods. They enforce the three rules above, automatically accept the request if it passes and automatically decline otherwise. They appear to work for me; your mileage may vary. No warranties, express or implied, etc.

Sub AutoProcessMeetingRequest(oRequest As MeetingItem)

	' bail if this isn't a meeting request
	If oRequest.MessageClass <> "IPM.Schedule.Meeting.Request" Then Exit Sub

	Dim oAppt As AppointmentItem
	Set oAppt = oRequest.GetAssociatedAppointment(True)

	Dim declinedReasons As String
	declinedReasons = ""

	If (oAppt.Location = "") Then
		declinedReasons = declinedReasons & " * No location specified." & vbCrLf
	End If

	If (HasConflicts(oAppt)) Then
		declinedReasons = declinedReasons & " * It conflicts with an existing appointment." & vbCrLf
	End If

	If (DateTime.DateDiff("h", DateTime.Now, oAppt.Start) < 24) Then
		declinedReasons = declinedReasons & " * The meeting's start time is too close to the current time. " & vbCrLf
	End If

	Dim oResponse As MeetingItem
	If (declinedReasons = "") Then
		Set oResponse = oAppt.Respond(olMeetingAccepted, True)
	Else
		Set oResponse = oAppt.Respond(olMeetingDeclined, True)
		oResponse.Body = _
			"This meeting request has been automatically declined for the following reasons:" & vbCrLf & _
			declinedReasons
	End If

	oResponse.Send
	oRequest.Delete

End Sub

Function HasConflicts(oAppt As AppointmentItem) As Boolean
	Dim oCalendarFolder As Folder
	Set oCalendarFolder = ThisOutlookSession.Session.GetDefaultFolder(olFolderCalendar)

	Dim apptItem As AppointmentItem

	For Each apptItem In oCalendarFolder.Items
		If ((apptItem.BusyStatus <> olFree) And (oAppt <> apptItem)) Then
			If (apptItem.Start < oAppt.End) Then
				' this item starts before the given item ends; if it also ends after the given item starts, the two overlap
				If (apptItem.End > oAppt.Start) Then
					HasConflicts = True
					Exit Function
				End If
			End If
		End If
	Next

	HasConflicts = False
End Function

Just open the VBA editor from within Outlook (Alt-F11) and paste the subroutines into the ThisOutlookSession project.

Screenshot

Then go and create an Outlook rule that calls the AutoProcessMeetingRequest subroutine for every meeting request you receive:

Screenshot

Those of your colleagues who persistently refuse to learn how to use email (an essential business tool!) will receive responses along the following lines:

Screenshot


Memory leak detector for Internet Explorer

9 July 2009 00:00 AEST
Internet Explorer Debugging JavaScript

I’ve been playing with Drip and sIEve in order to find some memory leaks that we’ve been encountering under Internet Exploder.

Drip / IESieve, Memory leak detector for IE Internet Explorer.

If you haven’t looked at your application with sIEve, you might find it useful.


console.log() equivalent for Internet Explorer

12 March 2009 00:00 AEST
Internet Explorer Debugging JavaScript

At the time of writing, IE (versions 7 and below) does not have a JavaScript console supporting console.log.

(Request to the IE8 team: Your product is still in beta. You must have a logging call somewhere. Publish it, please. Please.)

There are quite a few good console.log() equivalents out there, not the least of which are Faux Console and the Yahoo User Interface Logger Widget. For extremely light-weight applications, though, there was nothing that did just what I wanted, so I wrote one.

The JavaScript code:

// rudimentary javascript logging to emulate console.log(). If there
// already exists an object named "console" (defined by most *useful*
// browsers :p) then we won't do anything here at all.
if (typeof (console) === 'undefined') {

    // define "console" namespace
    console = new function() {
        // this is the Id of the console div. It doesn't actually need
        // to be a div, as long as it has an innerHTML property.
        this.ConsoleDivId = "JavaScriptConsole";

        // maintains a reference to the console output div, so that we
        // don't have to call document.getElementById a bunch of times.
        this.ConsoleDiv = null;

        // allows us to cache whether or not the console div exists, so
        // that we can just do an early exit from the console.log method
        // and similar if we're not going to put any useful output anywhere.
        this.ConsoleDivExists = null;
    };

    // this is an expensive (really quite expensive) string padding function.
    // Don't use it for large strings.
    console.padString = function(s, padToLength, padCharacter) {
        var response = "" + s;
        while (response.length < padToLength) {
            response = padCharacter + response;
        }

        return response;
    }

    console.log = function(message) {

        // this will be executed once, on first method invocation, to
        // get a reference to the output div if it exists
        if (console.ConsoleDivExists == null) {
            console.ConsoleDiv = document.getElementById(console.ConsoleDivId);
            console.ConsoleDivExists = (console.ConsoleDiv != null);
        }

        // only do any logging if we actually have an output div.
        // (Check using the cached variable so that we don't end up
        // with a bunch of failed calls to document.getElementById).
        if (console.ConsoleDivExists) {
            var date = new Date();
            var entireMessage =
                console.padString(date.getHours(), 2, "0") + ":" +
                console.padString(date.getMinutes(), 2, "0") + ":" +
                console.padString(date.getSeconds(), 2, "0") + "." +
                console.padString(date.getMilliseconds(), 3, "0") + " " + message;
            delete date;

            // append the message
            console.ConsoleDiv.innerHTML = console.ConsoleDiv.innerHTML + "<br />" + entireMessage;

            // scroll the div to the bottom
            console.ConsoleDiv.scrollTop = console.ConsoleDiv.scrollHeight;
        }
    }
}

Ideally you’d drop this into an included script file, but it’s more likely that you’ll paste it into a <script> tag in the header of your HTML document.

The HTML that creates the DIV to contain the output:

<div id="JavaScriptConsole" style="position: absolute; bottom: 30px; left: 30px; width: 600px; height: 200px; overflow: scroll; background-color: Yellow; color: Red;">
    <a href="javascript:document.getElementById('JavaScriptConsole').style.visibility = 'hidden';" style="float: right;">Close</a> <span style="font-weight: bold;">JavaScript Console</span><br />
</div>

Note that this div also contains a hyperlink with JavaScript code in it to hide it.

A simple hello world script to log to it:

<script type="text/javascript">
    console.log("Hello, world!");
</script>

… and finally, the output:

Screenshot


#if DEBUG Considered Harmful

16 January 2009 00:00 AEST
.NET C#

Several people have written about this one, but it still gets used and I feel I should add my $0.02. (That’s Australian money, by the way, so it probably works out at not very much in your own currency.)

This post is specific to C#, as .NET has the ConditionalAttribute class, which causes calls to a method to be emitted only when a particular compilation symbol is defined.

Consider the code below:

private static void Hello()
{
    Console.WriteLine("Hello, world!");
}

private static void Goodbye()
{
    Console.WriteLine("Goodbye, cruel world!");
}

public static void GreetTheWorld()
{
    #if DEBUG
    Hello();
    #endif

    Goodbye();
}

Let’s say that we compile this in Debug mode with code analysis turned on and warnings set to errors. (We all compile with warnings == errors, don’t we?) All is well.

We go to run our unit tests again in Release mode prior to check-in, so we recompile in Release mode. (Or, if we’re lazy, we just check in from our Debug build and let our build server compile and run the tests in Release mode.)

Oops. CA1811 violation: you have uncalled private methods in your code. Please call them if you meant to call them, or remove them if not. The FxCop engine will never notice that our #if DEBUG directive has compiled out the call to our Hello() method, so code analysis throws an error.

Use this one instead:

[Conditional("DEBUG")]
private static void Hello()
{
    Console.WriteLine("Hello, world!");
}

private static void Goodbye()
{
    Console.WriteLine("Goodbye, cruel world!");
}

public static void GreetTheWorld()
{
    Hello();
    Goodbye();
}

This makes the code analysis tooling much happier.

Let’s consider the first piece of code again, though, and edit it in Release mode. Perhaps we’d like to rename our methods to something more descriptive of what they do: PrintHello() and PrintGoodbye(). So, we whip out our trusty refactoring tool (^R ^R in Visual Studio) and tell it to rename our methods.

Here’s what we end up with (remembering that we’re in Release mode):

private static void PrintHello()
{
    Console.WriteLine("Hello, world!");
}

private static void PrintGoodbye()
{
    Console.WriteLine("Goodbye, cruel world!");
}

public static void GreetTheWorld()
{
    #if DEBUG
    Hello();
    #endif

    PrintGoodbye();
}

Oops. We’ve introduced a compilation error because the refactor/rename operation uses the compiled version of the code to check for symbol usage, and our call to the former Hello() method doesn’t appear in the compiled assembly because the #if DEBUG check caused it to not be compiled. We’ve left the old call to Hello() unchanged.

If we’d performed the same operation on the second piece of code instead, we’d be fine.


Brisbane Alt.Net User Group Launched

14 January 2009 00:00 AEST
.NET C# Community Brisbane

The Brisbane Alt.Net User Group has launched. Check it out at Brisbane Alt.Net or, even better, turn up to the first meeting in February.


Braces in string.Format()

2 December 2008 00:00 AEST
.NET C#

I’m surprised that I’ve never needed this before, but today I wanted to embed some JavaScript within a string contained in a C# class and format it using string.Format(...).

The problem? My JavaScript was a function declaration and therefore contained braces, but the placeholder delimiters in string.Format(...) also use braces.

The solution: braces get escaped using another brace of the same type.

var jsConditionalHelloWorldTemplate =
    "if ({0}) {{" + Environment.NewLine +
    "    alert('Hello, world!');" + Environment.NewLine +
    "}}" + Environment.NewLine +
    "";

var sendToBrowser = string.Format(jsConditionalHelloWorldTemplate, "true"); 
writer.Write(sendToBrowser);

Code Snippet for .NET Provider

25 November 2008 00:00 AEST
.NET C# Dependency inversion Visual Studio

While I think the .NET provider model is useful as a means of introducing dependency inversion if you really don’t want a container (?), it’s a bit of work to create so many peripheral classes in order to use it.

For example, we need to create a strongly-typed collection class that contains them all (presumably a left-over from the .NET 1.x days where there were no generic types), we need a configuration section class just to support an addition to the [web|app].config file, we need the provider class itself (effectively a factory class) and we need the class(es) of which it provides instances. Oh, and the interface that our provider stuff actually provides.

Here’s a code snippet (what’s a code snippet?) for creating a .NET provider and all the associated paraphernalia. It creates all the classes into one .cs file so you’ll need to use your favourite refactoring tool to extract them as appropriate. The snippet will also generate XML for you that can be copied/pasted directly into your [web|app].config file.

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets  xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>provider</Title>
      <Shortcut>provider</Shortcut>
      <Description>Code snippet for a .NET Provider implementation</Description>
      <Author>Andrew Harcourt</Author>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>providerName</ID>
          <ToolTip>The name of the provider (e.g. "Cache", "Licence").</ToolTip>
          <Default>Stupid</Default>
        </Literal>
        <Literal>
          <ID>interfaceName</ID>
            <ToolTip>The name of the interface that the provider will return (e.g. "ICache", "ILicence").</ToolTip>
            <Default>IStupid</Default>
        </Literal>
        <Literal>
          <ID>defaultProvider</ID>
          <ToolTip>The name of the default provider instance to use (e.g. "Web", "File"). The suffix "$providerName$Provider" will be added automatically.</ToolTip>
          <Default>Default</Default>
        </Literal>

        <Literal Editable="false">
          <ID>className</ID>
          <ToolTip>The type of the owning class.</ToolTip>
          <Function>ClassName()</Function>
          <Default>StupidClass</Default>
        </Literal>

      </Declarations>
      <Code Language="csharp">
        <![CDATA[using System;
using System.Collections.Specialized;
using System.Configuration;
using System.Configuration.Provider;
using System.Diagnostics;
using System.Reflection;
using System.Web.Configuration;

#region $providerName$Provider

    [Serializable]
    public abstract class $providerName$Provider : ProviderBase
    {

        protected abstract string DefaultName { get; }
        protected abstract string DefaultDescription { get; }

        public abstract $interfaceName$ Get$providerName$();

        protected static void CheckForUnrecognizedAttributes(NameValueCollection config)
        {
            if (null == config)
            {
                throw new ArgumentNullException("config");
            }
            if (config.Count > 0)
            {
                string attr = config.GetKey(0);
                if (!string.IsNullOrEmpty(attr))
                {
                    throw new ProviderException("Unrecognized attribute: " + attr);
                }
            }
        }

        protected string VerifyInitParams(NameValueCollection config, string name)
        {
            if (null == config)
            {
                throw new ArgumentNullException("config");
            }

            if (string.IsNullOrEmpty(name))
            {
                name = DefaultName;
            }

            if (string.IsNullOrEmpty(config["description"]))
            {
                config.Remove("description");
                config.Add("description", DefaultDescription);
            }

            return name;
        }
    }

#endregion

#region $defaultProvider$$providerName$Provider

    public class $defaultProvider$$providerName$Provider : $providerName$$end$Provider  //TODO Implement abstract class "$providerName$$end$Provider"
    {
        //TODO Add or merge the following into your (web|app).config file.
        /*
        <configuration>
          <configSections>
            <section name="$providerName$ProviderService" type="FULL_NAMESPACE_HERE.$providerName$ProviderSection, ASSEMBLY_NAME_HERE" />
          </configSections>

          <$providerName$ProviderService defaultProvider="$defaultProvider$$providerName$Provider">
            <providers>
              <clear />
              <add name="$defaultProvider$$providerName$Provider" type="FULL_NAMESPACE_HERE.$defaultProvider$$providerName$Provider, ASSEMBLY_NAME_HERE" />
            </providers>
          </$providerName$ProviderService>

        </configuration>
        */
    }

#endregion

// The code below here is auto-generated and shouldn't need any manual
// editing unless you want to do interesting stuff.  -andrewh 18/9/08

#region $providerName$ProviderSection

    [Obfuscation(Feature = "renaming", Exclude = true, ApplyToMembers = false)]
    public class $providerName$ProviderSection : ConfigurationSection
    {
        [ConfigurationProperty("providers")]
        public ProviderSettingsCollection Providers
        {
            get { return (ProviderSettingsCollection)base["providers"]; }
        }

        [StringValidator(MinLength = 1)]
        [ConfigurationProperty("defaultProvider", DefaultValue = "$defaultProvider$$providerName$Provider")]
        public string DefaultProvider
        {
            get { return (string)base["defaultProvider"]; }
            set { base["defaultProvider"] = value; }
        }
    }

#endregion

#region $providerName$ProviderService

    [Serializable]
    public class $providerName$ProviderService
    {
        private static $interfaceName$ _instance;
        private static $providerName$Provider _provider;
        private static $providerName$ProviderCollection _providers;
        private static object _lock = new object();

        public static $providerName$Provider Provider
        {
            get { return _provider; }
        }

        public static $providerName$ProviderCollection Providers
        {
            get {
              LoadProviders();
              return _providers;
            }
        }

        public static $interfaceName$ $providerName$
        {
            get
            {
                if (_instance == null)
                {
                    _instance = LoadInstance();
                }

                return _instance;
            }
        }

        private static $interfaceName$ LoadInstance()
        {
            LoadProviders();
            $interfaceName$ instance = _provider.Get$providerName$();

            // if the default provider fails, try the others
            if (instance == null)
            {
                foreach ($providerName$Provider p in _providers)
                {
                    if (p != _provider) // don't retry the default one
                    {
                        instance = p.Get$providerName$();
                        if (instance != null) // success?
                        {
                            _provider = p;
                            break;
                        }
                    }
                }
            }

            Debug.Assert(instance != null);
            return instance;
        }

        private static void LoadProviders()
        {
            if (null == _provider)
            {
                lock (_lock)
                {
                    // do this again to make sure _provider is still null
                    if (null == _provider)
                    {
                        $providerName$ProviderSection section = LoadAndVerifyProviderSection();
                        BuildProviderCollection(section);
                    }
                }
            }
        }

        private static void BuildProviderCollection($providerName$ProviderSection section)
        {
            _providers = new $providerName$ProviderCollection();
            ProvidersHelper.InstantiateProviders(section.Providers, _providers, typeof($providerName$Provider));

            if (_providers.Count == 0)
            {
                throw new ProviderException("No providers instantiated");
            }

            _provider = _providers[section.DefaultProvider];
            if (null == _provider)
            {
                throw new ProviderException("Unable to load provider");
            }
        }

        private static $providerName$ProviderSection LoadAndVerifyProviderSection()
        {
            // fetch the section from the application's configuration file
            $providerName$ProviderSection section = ($providerName$ProviderSection)ConfigurationManager.GetSection("$providerName$ProviderService");
            if (section == null)
            {
                throw new ProviderException("$providerName$ProviderService section missing from (web|app).config");
            }

            return section;
        }
    }

#endregion

#region $providerName$ProviderCollection

    [Serializable]
    public class $providerName$ProviderCollection : ProviderCollection
    {
        public new $providerName$Provider this[string name]
        {
            get { return ($providerName$Provider)base[name]; }
        }

        public override void Add(ProviderBase provider)
        {
            if (null == provider)
            {
                throw new ArgumentNullException("provider");
            }

            if (!(provider is $providerName$Provider))
            {
                throw new ArgumentException("Invalid provider type", "provider");
            }

            base.Add(provider);
        }
    }

#endregion
]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

The Value of Check-In Policies

29 August 2008 00:00 AEST

Public service announcement: do not swear in a codebase. It will bite you. It will appear in a stack trace, an error log or, worse, in an actual UI.

The Daily WTF has an article that demonstrates, amongst other things, the value of precommit hooks: We burned the poop


JavaScript Code Re-Use in Microsoft CRM

25 July 2008 00:00 AEST
Dynamics CRM JavaScript

Microsoft’s CRM tool offers some pretty powerful JavaScript event hooks. One thing it doesn’t appear to offer, however, is a way to import a library of JS functions and re-use them across different event handlers.

For example, if one wanted to display a “Hello, world!” message whenever several different attributes were changed, the conventional approach would be to embed the call to alert() in each of the event handlers. Obviously, for such a simple example, this isn’t such a big deal, but for more sophisticated logic it becomes unwieldy very, very rapidly.

One common approach is to use externally-referenced script files. Great, but imagine the horror when you suddenly discover that your system administrator has been religiously backing up your CRM server for the last six years, but hasn’t backed up the web server from which you were serving your scripts… We still have the problem of how to reference them, too.

Assignments to undeclared variables in JavaScript (evilly) end up as globals. We can exploit the same global object deliberately by attaching a function to window from within an OnLoad event handler as follows:

// This is the OnLoad event handler provided by CRM
function OnLoad() {
    // This is the function that we want to make
    // available globally.
    window.helloWorldFunction = function() {
        alert("Hello, world!");
    }
}

Then, in your event handler for other controls on the page, you can re-use that global variable:

// This is the OnChange event handler provided by CRM
function OnChange() {
    // ... and here's the one we prepared earlier.
    helloWorldFunction();
}

Writing Good Unit Tests

4 July 2008 00:00 AEST
.NET Testing TDD

Why do we write automated tests?

  • Specify the correct behaviour of the system
  • Improve code quality
  • Fewer reported defects
  • Make checking code faster
  • Tell us when we’ve broken something
  • Tell us when our work is done
  • Allow others to check our code
  • Encourage modular design
  • Keep behaviour constant during refactoring

What do we test?

  • The law of diminishing returns applies here
  • Testing everything is infeasible. Don’t be unrealistic.
  • 70% code coverage is actually pretty decent for most codebases.
  • First, test the common stuff.
  • Then test the critical stuff.
  • Next, test the common exception-case stuff.
  • Add other tests as appropriate.

When do we write an automated test?

  • First :)
  • Use a unit test to provide a framework for writing your code.
  • If you find yourself running up an entire application more than once or twice to test a behaviour, wrap it in a unit test and use that test to invoke that behaviour directly.
  • When you receive a bug report, write a test to reproduce the bug.

Some principles for automated tests

  • Test code is first-class code.

    • Tests should be small and simple but treated as just as important as the code that actually performs the task at hand.
  • Each and every test must be able to be run in isolation

    • Tests should set the environment up for themselves and clean up afterwards
    • Use your ClassInitialize, ClassCleanup, TestInitialize and TestCleanup attributes if you’re in MSTest-land, and the equivalents for NUnit, XUnit etc. (there’s a small sketch at the end of this post)
  • Tests should never rely on being executed in any particular order (that’s part of the meaning of “unit”)
  • Tests should not rely overmuch on their environment

    • Don’t depend on files’ being anywhere in particular on the filesystem. Use dynamically-derived, temporary paths if necessary.
    • Don’t hard-code paths.
  • If a class depends on another class that depends on another class that you can’t easily instantiate in your unit test, this suggests that your classes need refactoring.

    • Writing tests should be easy. If your classes make it hard, fix your classes first.
    • If fixing your class structures is difficult, consider writing a pinning test to enable refactoring, then fixing your class coupling, then removing that pinning test.
  • Tests should be cheap to write.

    • If writing a test is difficult, this suggests that the purpose of the code is unclear. Attempt to clarify the code’s purpose first.
    • Don’t worry about exception-handling.

      • If an unexpected exception is thrown, the test fails. Don’t bother catching it and manually asserting failure.
    • Don’t allow for variations in your output unless you absolutely have to.
    • If there are going to be different outputs, ideally there should be different tests.
  • Tests should be numerous and cheap to maintain.

    • Each test should test one behaviour.
    • It’s much better to have lots of small tests that check individual functionality rather than fewer, complex tests that test many things.
    • When a test breaks, we want to know exactly where the problem is, not just that there’s a problem somewhere in a call stack seven classes deep.
  • Tests should be disposable

    • When the code it tests is gone, the test should be dropped on the floor.
    • If it’s a simple, obvious test, it will be simple and obvious to identify when this should happen.
  • Tests need not be efficient.

    • Efficiency helps but correctness is key.
    • Optimise only when necessary.
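
As flagged above, here’s a minimal sketch of per-test set-up and clean-up using the MSTest attributes (the class, paths and assertion are illustrative only):

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TempFileWriterTests
{
    private string _tempPath;

    [TestInitialize]
    public void SetUp()
    {
        // Each test gets its own dynamically-derived temporary path rather
        // than relying on anything being in a particular place on disk.
        _tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(_tempPath);
    }

    [TestCleanup]
    public void TearDown()
    {
        // Clean up after ourselves so that tests can run in isolation and
        // in any order.
        Directory.Delete(_tempPath, true);
    }

    [TestMethod]
    public void WritesAFileToTheGivenDirectory()
    {
        var filename = Path.Combine(_tempPath, "hello.txt");

        File.WriteAllText(filename, "Hello, world!");

        Assert.IsTrue(File.Exists(filename));
    }
}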

Code Snippet for WPF Routed Event

13 June 2008 00:00 AEST
.NET WPF C#

This is a Visual Studio code snippet definition for a WPF routed event.

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets  xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>eventr</Title>
      <Shortcut>eventr</Shortcut>
      <Description>Code snippet for a WPF Routed Event</Description>
      <Author>Andrew Harcourt</Author>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
        <SnippetType>SurroundsWith</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>eventName</ID>
          <ToolTip>The name of the routed property (should *end* in ...Event).</ToolTip>
          <Default>Stupid</Default>
        </Literal>
        <Literal Editable="false">
          <ID>className</ID>
          <ToolTip>The type of the owning class.</ToolTip>
          <Function>ClassName()</Function>
          <Default>StupidClass</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[#region $eventName$ Routed Event

        public static readonly RoutedEvent $eventName$Event = EventManager.RegisterRoutedEvent(
            "$eventName$",
            RoutingStrategy.Bubble,
            typeof(RoutedEventHandler),
            typeof($className$));

        public event RoutedEventHandler $eventName$
        {
            add { AddHandler($eventName$Event, value); }
            remove { RemoveHandler($eventName$Event, value); }
        }

        /// <summary>
        /// Invoke this method when you wish to raise a(n) $eventName$ event
        /// </summary>
        private void Raise$eventName$Event()
        {
            RoutedEventArgs newEventArgs = new RoutedEventArgs($className$.$eventName$Event);
            RaiseEvent(newEventArgs);
        }

        #endregion]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

Windows Communication Foundation Introduction

1 May 2008 00:00 AEST
.NET WCF

Here’s a brief WCF overview I prepared recently for a development team.

What is Windows Communication Foundation (WCF)?

WCF is, in essence, a remote procedure call (RPC) framework for .NET.

Service Contracts and Operation Contracts

This is what your WCF service agrees to do for its callers.

Service contract
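
The screenshot isn’t reproduced here, but a service contract declaration generally looks something like this minimal, hypothetical sketch:

using System.ServiceModel;

[ServiceContract]
public interface IGreetingService
{
    // Each method exposed to callers is marked as an operation contract.
    [OperationContract]
    string Greet(string name);
}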

Data Contracts

These are the data types that your WCF service expects its callers to understand.

The service will also advertise a service-definition endpoint that will provide these data type definitions to its callers.

Data contract
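
Again, in place of the screenshot, a hypothetical data contract might look like this:

using System;
using System.Runtime.Serialization;

[DataContract]
public class Greeting
{
    // Only members marked as [DataMember] are included in the wire format.
    [DataMember]
    public string Text { get; set; }

    [DataMember]
    public DateTime GeneratedAtUtc { get; set; }
}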

Events (…or “Duplex Contracts”)

Out of scope for this article, but see How to: Create a Duplex Contract on MSDN for an explanation.

Hosting Your WCF Service

Hosting screenshot

Note the endpoint address:

Hosting screenshot

Adding a service reference to your project

The metadata exchange address is the equivalent of the old Web Service Definition Language (WSDL) address.

Adding a service reference

Calling the WCF Service

Call it just as you would your local methods.

Service method invocation
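
In place of the screenshot, a rough sketch: assuming the IGreetingService contract above and a proxy class generated by “Add Service Reference” (GreetingServiceClient is a hypothetical generated name), the call site is just ordinary C#:

var client = new GreetingServiceClient();
try
{
    // This looks like a local method call but goes over the wire.
    var message = client.Greet("world");
    Console.WriteLine(message);
    client.Close();
}
catch
{
    // A faulted channel can't be closed cleanly; abort it instead.
    client.Abort();
    throw;
}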

For curiosity’s sake, this is what some of the generated code looks like:

Generated code


Using DBML/LINQ to Generate WCF DataContracts

24 March 2008 00:00 AEST
.NET WCF LINQ to SQL

You can use the DBML editor to generate classes tagged with the WCF DataContract attribute.

Wriju Ghosh has a good post on LINQ to SQL : Enabling .dbml file for WCF.


WPF Context Menu Doesn't Display on First Load

14 February 2008 00:00 AEST
C# .NET WPF

The problem

When using the WPF OnContextMenuOpening event to build a menu dynamically, the ContextMenu doesn’t display on first load.

The reason

The OnContextMenuOpening routed event is used for dynamically creating a ContextMenu object for a particular UIElement.

Each UIElement has a ContextMenu property which dictates what gets displayed when a user right-clicks on it. If the property is null, nothing will be displayed. If the property is not null, the context menu that it references will be displayed.

The catch? The ContextMenu property must not be null before the event handler first fires, or the menu won’t load. This appears to be a WPF bug.

The solution

Create an empty ContextMenu object and assign it to each UIElement that’s going to have any context menu displayed. In the OnContextMenuOpening event, either Clear() the existing context menu or just create a new one and assign the property to the object reference. Either will work.
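
A minimal sketch of that workaround in C# (MyTreeView and the menu item are illustrative names for an element defined in the accompanying XAML):

using System.Windows;
using System.Windows.Controls;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // Assign a non-null (even if empty) ContextMenu up front so that the
        // very first right-click isn't swallowed.
        MyTreeView.ContextMenu = new ContextMenu();
        MyTreeView.ContextMenuOpening += OnContextMenuOpening;
    }

    private void OnContextMenuOpening(object sender, ContextMenuEventArgs e)
    {
        // Build (or rebuild) the menu dynamically each time it's requested.
        var menu = new ContextMenu();
        menu.Items.Add(new MenuItem { Header = "Do something interesting" });
        MyTreeView.ContextMenu = menu;
    }
}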


Play

Cyclist. Runner. Hiker. Singer. Violinist. Budding skydiver. Photographer. Former semi-pro photographer. Ballroom dancer. Motorcyclist. Occasional sailor. Good with edged weapons. Red Frog. Legatee.

Work

It should go without saying that any opinions, beliefs and other statements made here are my own, and do not represent in any way the views of any employer either past or present. Let's be grown-ups about this, shall we?

I'm Head of IT & Engineering at Etax, Australia's largest privately-held tax agent. Other interesting places I've been before Etax include Octopus Deploy, ThoughtWorks, Readify, Zap BI, Realex Payments and TRL.

I'm a fan of high-quality code, domain-driven design, event-driven architecture, continuous delivery and, most importantly, shipping software that works and that solves people's problems.

I have a number of small open-source creations, including Nimbus, ConfigInjector and NotDeadYet, and am an occasional contributor to several more.

I'm a regular speaker and presenter at conferences and training events. My mother wrote COBOL on punch cards and I've been coding in one form or another since I was five years old.

Sportsball