tl;dr: usability testing isn’t just for websites. Internal tools have users too, and watching those users try to reach their goals is illuminating, insightful and motivating.
Recently I was working on setting up an internal tool[0] for myself and fellow developers to test Logstash filters. We are currently upgrading the ELK stack we use for centralised logging, and I wanted to increase confidence in the Logstash filters we deployed. Previously, our Logstash filter configuration ran to about 900 lines of code. New changes could not be tested without being deployed. There was nothing to catch a regression except a human who happened to notice.
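For context, a Logstash filter configuration is a pipeline of transformations applied to every log event that passes through it. A minimal, hypothetical fragment (a sketch for illustration, not our actual 900-line config) might look like this:

```
filter {
  # Parse Apache-style access log lines into structured fields.
  # (Hypothetical example; a real config would handle many more log types.)
  if [type] == "access-log" {
    grok {
      match => { "message" => "%{COMMONAPACHELOG}" }
    }
    date {
      # Use the timestamp parsed from the line as the event's @timestamp.
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
```

At hundreds of lines of conditionals and patterns like these, an untested change can silently break parsing for a whole category of logs.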
I predicted that if I wanted to increase confidence, it was not enough to make testing filter changes possible; it had to be easy. This, I felt, would be crucial to maintaining test coverage over time — my fellow developers must be able to discover how to add tests, and find the process pain-free enough to maintain the habit.
To help me in this, I considered how I would get the feedback I needed. Pair programming and code review can be immensely useful, but they serve a different purpose, tending to focus on the implementation. What I needed was feedback on how discoverable the tool is. I wanted to know: “can a developer, tasked with changing the Logstash filter configuration, discover all they need without any prior knowledge of the tool?”.
Enter usability testing.
One of the first hits when searching for “usability testing” contains this description:
"Usability testing is a method used to evaluate how easy a website is to use. The tests take place with real users to measure how ‘usable' or ‘intuitive' a website is and how easy it is for users to reach their goals"
Replace “website” with “developer tool” and the principle remains the same. I might not be designing the website our paying customers use, but I still have users. Users who have a goal. With that mindset, I decided to conduct a usability test with some of my intended users, to observe how easy they found it to reach their goal.

Conducting the Usability Test
My test subjects were my two colleagues, Adam and Gary[1]. I ran the test individually with each of them over video chat, which also allowed them to share their screen for me to observe. My process was simple: give them the task of making a change to the Logstash filter configuration, then watch and listen.
That was basically it. I didn’t give any further direction. Didn’t introduce them to the tool, where they could get it, how it worked or anything. I just shut up, listened, and took notes.
Then I got to learn which parts were easy: discovering the documentation, finding existing test cases, and executing them. Even confirming that the name of the git repository is discoverable is useful — after all, naming is not easy.
I also got to learn what was hard: the biggest challenge was setting up test data and the expected output.
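To make that difficulty concrete: a test case for a filter needs a raw input line, exactly as it would arrive at Logstash, paired with the structured event the filter should produce. A hypothetical pair, assuming a grok filter using the stock COMMONAPACHELOG pattern, might look like:

```
# Input: one raw log line, as shipped to Logstash.
127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326

# Expected output: a few of the fields the grok filter should extract.
{
  "clientip": "127.0.0.1",
  "auth": "frank",
  "verb": "GET",
  "request": "/index.html",
  "response": "200",
  "bytes": "2326"
}
```

Each such pair also doubles as documentation of what the filter is supposed to do.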
I only intervened a couple of times, when some aspect of the tooling had been misunderstood and they had strayed far enough off the path that I wasn’t gaining any insight into what I was trying to test. For example, at one point while trying to find an example input, Adam began querying ElasticSearch and working through how to form the right query. But a log retrieved that way is output: it has already been transformed by the Logstash filter and by ElasticSearch, so it wouldn’t serve as a representative example of input. I felt there was no point in letting Adam continue down that path; it was obvious the tooling needed to improve, and we could talk later about potential improvements.
Both Adam and Gary were able to complete the task, which is a good indication that several aspects of the tooling were discoverable. The interventions that were required indicated that other aspects needed to be improved.
With the task complete, and Adam and Gary both introduced to the tool, they could never be “new users” again. The old adage “you only get one chance to make a first impression” applies here[2]. Since I couldn’t use them as test subjects in the same way again, I took the opportunity to describe features or documentation they had missed, and to ask their opinion on how the tool could be made more useful and discoverable. Those changes can then be tested in future, when I find more blank canvasses: eager developers to run the usability test with.
Lessons learned
So if you’re working on internal tools or APIs, don’t forget that while you may not have direct customers, you (hopefully) have users. Watching them try to interact with what you produce is illuminating, insightful and motivating. I encourage you to try.
[0] Why this is an internal tool, and how I implemented it, is a whole ‘nuther blog post.
[1] names unchanged to condemn the guilty.
[2] not until I can get my hands on a Neuralyzer
Originally published on devblog.timgroup.com
ABOUT GRUNDLEFLECK
Graham "Grundlefleck" Allan is a Software Developer living in Scotland. His only credentials as an authority on software are that he has a beard. Most of the time.