There is a recent and disturbing trend in the software quality space for people to view themselves as “test automation engineers” or similar, and to focus on creating large automation suites post-hoc. These suites are normally generated verbatim from acceptance criteria and mapped directly to UI-automation tests. The guiding principle appears to be that no bug shall ever reach production. While this goal is noble in theory, it’s destructive in practice. Worse, it distracts us from the realisation that software quality is about much more than testing.
In this talk, we’ll cover a number of other, often-overlooked elements of software quality such as code design itself, monitoring, logging, instrumentation, SRE, synthetic transactions and production verification tests. We’ll look at production error rates and how to assess what an acceptable error rate is, and we’ll cover measures such as mean time to detection (MTTD) and mean time to remediation (MTTR) as key metrics for the overall quality of a solution. Critically, we’ll then put that into the context of what the system’s purpose is and whether that system is Good Enough.
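To make those two metrics concrete, here’s a minimal sketch of how they might be calculated from incident records. The Incident fields and the exact definitions are illustrative assumptions only; in particular, some teams measure MTTR from the moment the fault occurred rather than from the moment it was detected.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    occurred_at: datetime   # when the fault actually began in production
    detected_at: datetime   # when monitoring or alerting first noticed it
    resolved_at: datetime   # when normal service was restored

def mttd(incidents: list[Incident]) -> timedelta:
    """Mean time to detection: average gap between a fault occurring and it being noticed."""
    total = sum((i.detected_at - i.occurred_at for i in incidents), timedelta())
    return total / len(incidents)

def mttr(incidents: list[Incident]) -> timedelta:
    """Mean time to remediation: average gap between detection and restoration of service."""
    total = sum((i.resolved_at - i.detected_at for i in incidents), timedelta())
    return total / len(incidents)

# Two made-up incidents purely to show the calculation.
incidents = [
    Incident(datetime(2019, 3, 1, 9, 0), datetime(2019, 3, 1, 9, 12), datetime(2019, 3, 1, 10, 5)),
    Incident(datetime(2019, 3, 8, 14, 30), datetime(2019, 3, 8, 14, 33), datetime(2019, 3, 8, 15, 0)),
]

print(f"MTTD: {mttd(incidents)}, MTTR: {mttr(incidents)}")
```

The point of tracking these numbers is not the arithmetic but the conversation they force: how quickly do we find out that something is broken, and how quickly can we put it right, relative to what the system’s purpose actually demands?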
My name is Andrew Harcourt.
I do Head of Technology/Engineering, Consultant CTO and other similarly-shaped work with companies large and small. I specialise in project rescue, governance and development methodologies.
I'm a Principal Consultant at ThoughtWorks, a co-founder of Stack Mechanics and one of the organisers of the DDD Brisbane conference. In my spare time (ha!) I also run my own photography business, Ivory Digital.
My main areas of interest are domain-driven design, event sourcing, massively-scalable service architectures and large-scale, high-load, geographically-distributed systems.
I'm a regular speaker and presenter at conferences and training events. My mother wrote COBOL on punch cards and I've been coding in one form or another since I was five years old.
Cyclist. Photographer. Ballroom dancer. Motorcyclist. Occasional sailor. Lapsed fencer.