Good developer products are easy to use. I don’t think that statement is controversial, but one of the challenges of software design is that the things that make a product easy to use are often small and may seem insignificant on their own. Consistent APIs, logical defaults, simple getting-started instructions, and an uncluttered UI are the things that make good products.
And yet, a lot of folks in developer relations feel a bit sheepish entering a bug like “Resource should be plural to be consistent with other commands.” Realistically, that bug is going to annoy a bunch of developers, but if it is the only issue, it probably won’t block adoption of an otherwise great product. When these little issues pile up, it can be hard to explain that the net effect is a library or tool that no one wants to use.
This is where friction logs come in handy. A friction log is a document that lists all the little stuff that makes the tool hard to use, but frames it around a narrative or customer use case. I learned about them in my first weeks at Google. We often ask new dev relers to write friction logs for common scenarios that they are familiar with from previous work. The critical difference between a friction log and a bug list is that the friction log puts the issues in context. It tells the story of the entire user experience from start to finish. Well written friction logs are one of the best tools I know for developing user empathy in folks who don’t get to talk to customers very often.
The friction log template we use at Google has a few common questions at the top: the logger’s name, the platform/language/browser if relevant, the date, and the product(s) used for the log. After those basics, the logger describes the scenario they are trying. In my opinion, this is the most crucial part of a friction log. The scenario should be simple, ideally no more than two short sentences. It should be something familiar enough that no one is going to react with, “Why would someone want to do that?” when they read it.
For example, the first friction log I did was “Upload a picture of my cat to Google Cloud Storage.” I’ve also done “Move a Rails website to Google Cloud.” These scenarios make good friction logs because they are things that many users are likely to do.
The rest of the document is a log of what you did and any reactions you have while you work through the use case. If you searched for something online, write down the search terms. If you follow the third link in the results, make a note of that and record the URL. If you type commands into a command line or write code, copy and paste it into the log. While this is similar to recording bug reproduction steps, there is no need to find minimal steps, and recording your reactions, such as “Now I’m frustrated” or “Copied and pasted this from the docs, didn’t bother reading the prose,” helps tell your story.
While friction logging, it is vital that you try to forget your inside knowledge and approach the problem as a user would. This is one of the reasons new hires are great friction loggers; they don’t know enough about the inner workings of the product to instinctively avoid the rough spots.
At Google, we use stoplight-coloured highlights (red, yellow, green) in the log to point out particularly excellent or problematic parts of the experience.
Friction logs do no good unless you share them with people empowered and motivated to make a change. For me, this was the magic of my first few friction logs. I submitted them via a web form, and they got routed to the appropriate product manager and engineering team. They looked at my log and correlated known issues with existing bugs, leaving comments pointing to the bug in my log and comments leading to the log in the bug. If I found new problems, they entered bugs.
They also left comments asking for clarification and my suggestions for how to improve parts of the experience. In addition to any product defects, the friction log pointed out places where SEO was poor, or a particular type of documentation was missing. That’s something that I find hard to write a traditional bug for but happens naturally with a narrative log.
I’m sure many companies and teams do something similar. When I was in QA, we did acceptance and scenario testing as part of release sign-off. On the surface, that seems the same as friction logging. The difference I see in friction logs is that the point isn’t to find things that don’t work and enter bugs. Instead, the focus is on the entire user experience and recording all the places where things were rough, where a user would experience friction. Since there’s a log of the entire experience, it calls attention to how the whole product fits together and how it works with other products.
Most of my friction logs are relevant to several engineering teams. As a former tester, I appreciate that I can write a friction log on a scenario that isn’t on the critical path. Acceptance testing was all about what we were about to release and core product scenarios. I can write a friction log on nearly anything. Some of my best friction logs came from real-world experiences, like trying to deploy an app I was writing as part of a volunteer project.
If you haven’t heard of friction logs before, I hope you can add this tool to your dev rel toolkit. If you have used friction logs or something similar, I hope this article gives you a different perspective on how scenario-focused feedback can be useful.