At DevRelCon London 2019, Box’s Cristiano Betta shared the practicalities of how they have taken an engineering approach to their API documentation.
Cristiano: A long time ago, which is about a year and a half ago really, when I joined Box, we were trying to translate our developer documentation. We had a website, developer.box.com, but we also had a very large Japanese market. If your company works in Japan and needs customers there, the way they operate is very different. In our case, a lot of reselling is done through partners, and those partners basically do the integration on Box’s behalf. These partners need to work with our developer documentation quite intensively, and spoken English is just not that common in Japan; it’s really low on the index of countries that speak English fluently. So we did eventually get there, but it was a lot of work.
At the time, we were hosted on ReadMe. ReadMe allows you to make a backup, so we thought, “Cool, we export it, we translate it, and we just upload it again. Seems pretty straightforward, pretty easy”. Except it was not that easy. When you do an export-and-import kind of thing, you’d expect it to be symmetrical: if I create an export and then import it again, I should get the same system at the end. That’s not how it worked at ReadMe. We would get a lot of problems with things being out of order, and certain tags that were exported just not being imported again. So what we thought was going to be a pretty straightforward process turned out to be a lot of work. It took us ten seconds to create a backup, and three months to do that initial translation. We thought, “Cool, now we can start doing translations over and over again and they should be pretty quick”, but every import took us a month of painstakingly putting everything back into the order that we wanted. And that wasn’t everything. It was just the tip of the iceberg.
There were many problems: the documentation was hard to translate, we had no audit trail, we couldn’t tell who was writing what and when, there was no review process because you can’t write a draft in ReadMe, there was no modularity, it was hard to refactor, and in general it was really hard to ensure that the documentation was of good quality. If you read that list you’re probably going, “Ah, Docs as Code, that’s the way to solve this”. If you’re not familiar with it, Docs as Code is a principle that’s been around for quite a few years now; I’ll dig into it in a moment. You can read more about it on writethedocs.org. Basically it says a couple of different things: store your documentation in version control, whether that’s Git or SVN, though these days a lot of it is in Git; build the documentation automatically; review the documentation as you write it; and publish the documentation without much user intervention. I was interested in that because I’d never fully gone through it with any customers, and I was very interested in learning more. But I felt like there must be more to this. I think it’s a pretty simplified model.
Before I go into the learnings we got from this: who am I? I’m Cristiano, I’m a Senior Developer Advocate at Box, and I’m also our Developer Documentation Lead. Why am I our Developer Documentation Lead? Because I said, “We should fix this”, and they said, “Cool, you do it.”
I used to work with Hoopy, I used to be a Developer Advocate at PayPal, and I used to scare the crap out of people here on stage by trying to onboard onto their products in 25 minutes. I’m not doing that this year, I’m doing a serious talk. I also write on my own website, although I haven’t done that in a while, but I do these breakdowns of developer experiences there, as well.
Docs as Engineering. That’s the title of my slide, and the title of my talk. When I talk about engineering, I’m not necessarily talking about architecture; I’m talking about software engineering, the thing that we’re all so very much familiar with. Within software engineering we have a lot of principles that we try to hold ourselves to: write modular code, keep things simple, don’t repeat yourself, anticipate change, do one thing well — you know, the Unix principle — test early, and lots of other principles that are out there. What I’m trying to see is how we can take these kinds of things and apply them to the Docs as Code principle, and take Docs as Code to its logical next steps. Actually, if you read the Docs as Code book, and the website, and listen to all the conferences that talk about documentation, there are a couple of things that are already really being talked about.
One of those is the idea of testing and linting. In software engineering, we have a lot of principles around testing and linting: test early, test often, test the parts, but also test the whole. Not just unit tests, but also full integration tests. So what does that mean for documentation? It means that if you test early, you want to test at the source, and there are a couple of things you can do there. Part of our documentation is now built off of our API spec, and Spectral, an open source tool built by Stoplight, allows you to validate your OpenAPI spec to make sure that it’s actually a correct specification. At the end of the day, an OpenAPI spec is just a JSON file, but not every JSON file is an OpenAPI specification. So you run that, and it tells you all the problems in your API specification. And we’ve actually extended this to make our definition of an API specification even stricter.
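As a rough sketch, a custom Spectral ruleset that extends the built-in OpenAPI rules can look like this; the rule name and JSONPath below are illustrative examples, not Box’s actual ruleset:

```yaml
# .spectral.yml -- extends Spectral's built-in OpenAPI rules with a
# stricter custom rule (this rule is an illustrative example)
extends: spectral:oas
rules:
  schema-properties-have-examples:
    description: Every schema property must define an example.
    severity: error
    given: "$.components.schemas[*].properties[*]"
    then:
      field: example
      function: truthy
```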
We wanted to make sure that every variable has an example, because otherwise we can’t show examples in our reference documentation. You can add your own rules. And you want to test the unit. One of the other things we really wanted to test was spell checking. For both our API specification and our markdown, we pulled in some open source spell checking libraries and we’re doing automatic checking to make sure that we’re not trying to be extra cool and spelling fields with a zed. Just by importing the old documentation and running it through this, it’s caught so many weird spelling mistakes and inconsistencies. We have teams in the US and teams in Europe, and there are different spellings in British English and American English, so we’re now actually trying to stick to one language. And then, of course, test the whole. One of the things we noticed is that we had a lot of internal links pointing to pages that just weren’t there anymore, and the old system wasn’t warning us about this, so we started to create our own solution.
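As a minimal sketch of the consistency side of that checking, here is one way to flag British spellings when standardizing on American English. The word list and function name are illustrative, not the actual libraries Box pulled in:

```javascript
// Illustrative house-style check: flag British spellings when the house
// style is American English. The word list is a tiny example, not a real
// dictionary.
const BRITISH_TO_AMERICAN = {
  organisation: "organization",
  authorise: "authorize",
  colour: "color",
};

// Scan a markdown string and report every word that violates the style,
// along with the suggested replacement and its position in the text.
function findSpellingIssues(markdown) {
  const issues = [];
  for (const [british, american] of Object.entries(BRITISH_TO_AMERICAN)) {
    const pattern = new RegExp(`\\b${british}\\b`, "gi");
    let match;
    while ((match = pattern.exec(markdown)) !== null) {
      issues.push({ found: match[0], suggestion: american, index: match.index });
    }
  }
  return issues;
}
```

In a build step, a non-empty result would fail the build, the same way a failing unit test would.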
Now, we moved a lot to writing in markdown, and one of the problems in markdown is that if you want to link to a path, you often have to say “this link is a couple of pages up” and make everything relative to that. The reason we have to do that is that we can’t use absolute paths, because we’re translating everything into Japanese as well as English, so the root path isn’t necessarily the path it’s pointing to. So we did a couple of things. First of all, we made it easy to add our own kind of schema and say, “I just want to point at this guide.” Then we did the same thing for our API specification, where we can say, “I’m specifically trying to point to this file, I’m trying to point to this API endpoint.” But of course we also added our own checks to make sure that all these pages actually exist. Do all of these links across the documentation actually exist? Do those pages exist in the API specification or not? By standardizing the way we structure our links, we suddenly made it a lot easier to go in and find out whether those links actually exist across all of our documentation. I think the first time I ran it, I found 120 broken links, just by running it once. The nice thing is, this runs every time we run our build process, so we’re never going to get to a point where we have 120 broken links again.
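A simplified sketch of what such a link check can look like; the `guide://` scheme and the function here are hypothetical stand-ins for Box’s actual link syntax:

```javascript
// Illustrative internal-link check. Assumes a hypothetical "guide://"
// scheme for cross-references in markdown links. Given a map of file
// name to markdown content, and the set of pages that actually exist,
// report every link whose target page is missing.
function findBrokenLinks(markdownFiles, existingPages) {
  const pages = new Set(existingPages);
  const broken = [];
  const linkPattern = /\]\(guide:\/\/([^)]+)\)/g;
  for (const [file, content] of Object.entries(markdownFiles)) {
    let match;
    while ((match = linkPattern.exec(content)) !== null) {
      if (!pages.has(match[1])) broken.push({ file, target: match[1] });
    }
  }
  return broken;
}
```

Because the link scheme is standardized, the checker never has to reason about relative paths or locale prefixes; it only compares targets against the known set of pages.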
Audience member: Famous last words.
Cristiano: Famous last words. Yeah, until our tests break, obviously. But at least it’s going to give us a warning, hopefully. Modularizing is another principle that is very popular in software: the idea that you should break things into their components, make sure they do one thing well, and allow for easy inclusion. And you favor composition over configuration. Rather than configuring things, you can take multiple components that you’ve written and combine them together into nice, new bits of software. We wanted to apply that same thing to our documentation.
One of the things we ran into, especially with our OpenAPI specification, is that once it’s compiled, it is 20,000 lines. The reason is that we have 176 API endpoints and about 80 response and request objects, so all together it compiles to 20,000 lines of JSON. If you try to load that in GitHub to edit, you can sit there for 15 seconds waiting for GitHub to finally get around to loading it, and then you start scrolling and it goes, “Wait, wait. Didn’t load that yet.” So that didn’t really work. What we did instead is split our API specification into separate files. Each one is a YAML file, and we came up with a very simple way to map a verb and a path to a file location, grouping the files into logically grouped endpoints. In each file we have maybe 50 to 100 lines, with nice, multi-line text that people can just step into and edit. Suddenly it became way easier for people to start contributing to our OpenAPI specification. Our engineers can just go in there, find that one file, go, “Oh yeah, we’re missing this field, let’s add that in”, edit, make a PR, submit, and it just ships into the API specification. One file per API endpoint. One file per response object.
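The exact naming convention isn’t shown in the talk, but a mapping from verb and path to a file name could be sketched like this:

```javascript
// Illustrative mapping from an HTTP verb and path to a YAML file name.
// The convention below (lowercase verb, path segments joined with
// underscores, parameter braces dropped) is an assumption, not Box's
// documented scheme.
function specFilename(verb, path) {
  const slug = path
    .replace(/[{}]/g, "") // drop parameter braces: /files/{file_id} -> /files/file_id
    .split("/")
    .filter(Boolean)      // drop the empty segment from the leading slash
    .join("_");
  return `${verb.toLowerCase()}_${slug}.yml`;
}
```

The point of any such scheme is that it is mechanical in both directions: an engineer can predict the file for an endpoint, and the build can predict the endpoint for a file.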
Then we use another Stoplight product, the JSON Resolver. We create these references, load everything in, and it resolves back into one file that we build out and ship to an English branch. That’s the 20,000-line file that we can then use to import into other places. Another idea from composition is to allow easy inclusion. This is the same file. One of the things we ran into, for example, is that the file ID appears in a lot of different places, and we had a lot of the same text in lots of different places. So we just extracted it, created a reference to it, put it in a file, and said, “Hey, file ID, this is how we describe a file ID.” Then everywhere we needed it, we could just go, “Oh, just load that entry in there.” Suddenly it became a lot easier to not have to constantly be aware that if someone changed the text over here, they would have to go and change it everywhere that we describe a file ID.
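As an illustration of that kind of extraction (the file names here are hypothetical), a shared attribute can live in its own YAML file and be pulled in by `$ref` wherever it’s needed:

```yaml
# attributes/field_id.yml -- the shared description, written once
description: The unique identifier that represents a file.
type: string
example: "12345"

# An endpoint or schema file then points at it instead of repeating
# the text, for example:
#
#   properties:
#     id:
#       $ref: "../attributes/field_id.yml"
```

The resolver inlines every reference at build time, so the compiled spec still looks like one flat file to downstream tools.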
And of course we took this beyond the documentation as well: we created modular UI. Our front end is full-on React, so we were able to use what looks like an HTML component in our markdown but actually resolves to a React component, and that let us do quite interesting things. Some of it is very simple, like in this case, a message component that has nice styling around it; we all have these in our documentation, and they’re either blue, orange or red, the three preferred colors for that. But we also have some more interactive components. For example, one of the things we ran into is that every time our SDK team made changes to an SDK, they had to change the documentation they had on GitHub, with all the samples for how to use each method, as well as go into our developer documentation and update it there too. And they never did that. So instead, in our .NET SDK for example, we went ahead and added these little hidden HTML tags in all of the markdown files, right before the code block that described exactly that endpoint. Then, on every build, we pull down and scrape all the markdown files and extract all the samples. Now, in any markdown file, you can say, “I just want the samples for that endpoint”, and it will render every possible language that we find. Any language that we support for that endpoint, boop, it will just render it. If we don’t have a Java one, that tab will just disappear. It has also allowed us to create overviews where we can look and go, “Hey, which samples are we missing? Which ones are we lacking? Which ones are we using, but don’t have in every language?”
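A simplified sketch of that scraping step; the comment-tag format and the function are assumptions about the general approach, not Box’s actual implementation:

```javascript
// Illustrative sample scraper. Assumes each sample in the SDK's markdown
// is marked by a hidden HTML comment naming the operation, immediately
// before a fenced code block, e.g.:
//
//   <!-- sample get_files_id -->  followed by a ```csharp fenced block
//
// Returns a map of operation id -> { language: code }.
function extractSamples(markdown) {
  const samples = {};
  const pattern = /<!--\s*sample\s+(\S+)\s*-->\s*```(\w+)\n([\s\S]*?)```/g;
  let match;
  while ((match = pattern.exec(markdown)) !== null) {
    const [, operationId, language, code] = match;
    samples[operationId] = samples[operationId] || {};
    samples[operationId][language] = code.trim();
  }
  return samples;
}
```

Running this across every SDK repo on every build is also what makes the coverage overviews possible: any operation with a missing language simply has no entry in the map.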
What’s interesting, I think, is that we took that a step further. Our OpenAPI specification is OpenAPI 3.0, but there are a lot of tools out there that still use the old spec, Swagger 2. Cool, we just have our CI backport it, so we also have a Swagger 2 spec that is the same file as our OpenAPI spec, but backported. Then we thought, cool, let’s auto-generate our Postman collection, so we have a Postman collection that is auto-generated off of our API spec. Every time we update our API specification, it updates our documentation, pulls in all the new samples, pushes it out to Postman, and builds out a Swagger 2 file, all at the same time. Which brings me to the idea of a pipeline. We develop software these days very much within the concept of pipelines: developers ship code, things end up in a staging environment, they get approved, reviewed, QA’d, you might do some A/B testing, and then eventually it rolls out to the production environment, to the master environment, and so on.
The same thing can be applied to documentation to some extent. So what is a pipeline? Pipelines do a couple of things. First of all, they ensure quality: within the pipeline we do our testing and validate that our code still does what it’s supposed to do. They maximize value, because they allow us to ship to multiple locations faster and in parallel, and they speed up delivery. But what I think is interesting is that a pipeline also encourages responsibility, because it basically says: hey, anybody who documents, anybody who writes software, if you manage to merge this into master, it will ship, it will go live, it will be part of the product, almost instantly.
So for documentation, what does that look like for us right now? This is the key slide everybody wants to take a photo of, so I’ll run through what it does. It starts over here: we have our microcopy and our guides, which are mostly markdown and a couple of YAML files, as well as our OpenAPI specification, which is mostly YAML. Travis picks it up, does the spellchecks, does all the validation, and then writes it back to an en branch, which is our English version, mostly as markdown and JSON again, as compiled and sanitized sources. Then every time the en branch gets updated, or our SDKs, or our Gatsby source (Gatsby is the static site generator we use), it calls a Netlify function, which is a serverless function, which determines which of our stages to trigger, because we have a staging environment, a master environment, and a translation environment for our upcoming full Japanese translation. Netlify then pulls all those sources in, builds the site, and ships it out to Netlify, which we’re using as our hosting provider.
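The core of such a serverless function is just routing: given which branch changed, decide which environments to rebuild. Here is a sketch of that decision, with branch and environment names assumed from the talk rather than taken from Box’s actual code:

```javascript
// Illustrative routing logic for the webhook-triggered serverless
// function: map an updated source branch to the environments whose
// builds should be triggered. Branch and environment names are
// assumptions based on the talk.
function environmentsFor(branch) {
  switch (branch) {
    case "en":
      return ["staging", "production"]; // English sources feed both sites
    case "jp":
      return ["translation"]; // Japanese branch feeds the translation site
    default:
      return []; // unknown branches trigger nothing
  }
}
```

In the real function, each returned environment would correspond to a build hook being called on the hosting provider; the pure mapping is the part worth keeping small and testable.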
But of course we took that even a step further. Like I said, we’re not just building documentation at this point; now that we have great source materials that we’re validating and sanitizing, we’re taking it further. To start at the bottom: we have a couple of different build servers, including our in-house build server. Every time our English sources change, it creates a snapshot, sends it off to our translation teams, and then writes the Japanese files back to a Japanese branch. Every time that happens, Netlify picks up any of these sources and builds them out as documentation. But every time these sources change, Travis also picks them up, and currently it’s pushing out a Postman collection, not just in English, but also in Japanese, because we already have our specification translated. What we’re looking at in the future is for it to start influencing our SDKs as well. Currently our SDKs are handwritten, but we’re already putting tools in place that can at least notify us when our SDKs are lacking samples or functionality, maybe opening a ticket or an issue. And then we’re getting to the point where we can detect changes in any of the pages in any of our SDKs and start, maybe, auto-generating our changelogs, creating those off of the source material rather than handwriting them every time we have a new release.
So where are we now? This is the site. It doesn’t look very good, because I haven’t really involved the design team yet; that’s for the next few quarters. We have about 250 guides by now; we’ve managed to grow our guides by 150% or so while we were doing this. Every page has a nice little “edit this page” link at the top right; you can click on that and it takes you straight to the source. Somebody can quickly edit it, change it, and if it gets approved and merged, it automatically gets shipped out, allowing that cycle of responsibility to move forward. We have an API reference page with all 176 of our API endpoints, exactly what parameters each one takes, and all of the samples on the right, pulled straight from our SDKs and integrated. One of the nice things is that we also managed to let people play around with our API straight in the documentation, because it’s not just a static site; it is actually a React app, to some degree. We were suddenly able to add these bits of interactivity so people can just play around, get the items in a root folder, and see the responses on the right-hand side. We built in our own search, of course. And of course, we’ve also started doing our translation to Japanese, which was the whole reason we started this in the first place.
Currently, we have just the API reference available in Japanese, which is already proving ridiculously useful to our Japanese teams. We’re planning to ship the full site, completely translated, by the first or second week of February; that’s what we’re aiming at right now. It’s a large translation initially, but once we’re there, it’s every two weeks: snapshot, translation, and it’s fully automated. I don’t even have to be involved in it, which makes me very happy. And of course, it’s fully mobile. It’s a PWA, it works offline, all those kinds of things, because modern web technology is cool.
To recap, I think Docs as Code is an awesome principle, and there’s an awesome amount of content written on it. But personally, I feel like there’s so much value we can get if we don’t just treat our documentation as code, but treat it as a proper engineering project. So besides storing docs in version control, building documentation automatically, reviewing it as you write, and publishing without user intervention, I feel like we can add to that a little bit and say: test anything that can be tested. Make sure your documentation quality remains good going forward, that there are no regressions, and every time you run into an issue, just like with software, cool, make the build fail first, and then fix it. Then modularize to prevent duplication, so that we can reuse our documentation, not just within the documentation, but also reuse our source material to maybe create other content that we weren’t creating before. All of that allows us to maximize value. And I think using a pipeline is the only way to tie all of that together and make it scale to some degree. So that, to me, is really the value of Docs as Code, if we take it a little bit further, a little bit broader. So that’s it, thank you very much.