Testing vs using in software development and evaluation

Over on the software side of the Less Annoying world, Tyler and I have recently been working on a pretty substantial redesign of the interface for LAS. While we certainly subscribe to the philosophy of using our own software as a means of debugging and developing (eating our own dog food, as it were), the majority of our testing involves rigorously, methodically, and critically exercising each new element as much as possible.

On the other side of the aisle, one of the themes that we like to focus on around here is that when evaluating software, feature lists tend to get in the way, as there's no substitute for just trying things out directly. Furthermore, "trying it out" means doing so in the context in which you ultimately intend to use the software; it doesn't mean going through and testing every feature individually (which is basically just an indirect way of regenerating the feature list). This is the end-user equivalent of eating your own dog food.

While there are obviously good reasons for these differences in approach when designing vs. evaluating software, the stark contrast between the two also strikes me as a bit strange. Why are the standard methods for testing software during development and during evaluation so different, and are there times when one might borrow from the other?

Why the differences?
This piece is probably pretty obvious, but it's worth stating anyway. The reason that developers focus on rigorous testing while users focus on experience comes down to expectations. A software developer has to assume that someone out there might be using each and every feature of the software (otherwise, the feature probably shouldn't exist). Given that assumption, the need to test and re-test each and every element should be self-evident. In contrast, an end-user tends to start from the expectation that everything in the software already works (there's presumably some developer behind the scenes attending to that), so the only thing left to determine is whether the software works for you.

Unfortunately, neither of those expectations is likely to be as true as one might like. Inevitably, some features will be less utilized than others, and with similar inevitability, there will be some residual bugs in any piece of software. As such, it's important for both developers and users to take a page out of each other's book.

When to test; when to use
So, what are some particular cases where a strategic developer/user flip can be handy? From a development standpoint, testing should be reserved for evaluating code that you've written and for which the expected functionality is already established. In the process of the LAS redesign, Tyler and I have spent a ton of time thinking carefully about how each page should look, work, and feel. While doing so, it can be tempting to take a critical approach similar to that used when debugging code. More often than not, however, the things that really matter about the UI come down to how things feel when using it, and that feel is almost always lost when examining things with an objective eye. One way to get around this is to imagine an ideal workflow and design the interface to encourage and optimize that mode of use. You obviously need to make sure everything else works as well, but there's nothing wrong with designing with a particular usage in mind. As much as it may irk me at times, Apple has taken this type of design to an extreme with great success.

From a user perspective, the times when one needs to approach the evaluation with a more regimented stance are probably less frequent, but they can be quite important. The most important case is critical but rarely used features. Probably the most common example is exporting or backing up your data. Chances are you won't be using such a feature often, but when you do, it had better be rock solid. It's dangerously easy to fall in love with and become dependent upon some software, only to discover down the road that your data is locked in a cryptic proprietary format, on someone else's server, or only incompletely available for export. As such, the first thing I do when trying out any kind of software into which I'll be putting information I care about is to give the export options a thorough once-over.
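To make that concrete, here's a minimal sketch of what such a once-over might look like in Python, assuming a hypothetical CSV export. The filename (export.csv) and the field names are placeholders, not anything specific to LAS; the point is simply to round-trip a few known records through the export and verify that everything you put in actually comes back out.

```python
import csv

# Records we entered into the app by hand (hypothetical sample data).
entered = [
    {"name": "Ada Lovelace", "email": "ada@example.com"},
    {"name": "Alan Turing", "email": "alan@example.com"},
]

# Read the file the app's export feature produced.
with open("export.csv", newline="", encoding="utf-8") as f:
    exported = list(csv.DictReader(f))

# Every record we entered should come back out, field for field.
missing = [record for record in entered
           if not any(all(row.get(key) == value for key, value in record.items())
                      for row in exported)]

if missing:
    print(f"Export is incomplete; {len(missing)} record(s) missing:")
    for record in missing:
        print(" ", record)
else:
    print("All entered records survived the export round-trip.")
```

Even a quick check like this, run once before you commit to a tool, will surface truncated fields, missing records, or mangled encodings long before they can hurt you.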

The chances are pretty good that you're already doing something along these lines whether you're developing or evaluating software. An appropriate mix of objective detail-focused testing and subjective experience-centric using is almost always called for, but it's worth taking a step back and thinking about times when your balance might be a touch off.

