As Director of Interviewing at Karat (now Head of Developer Community) and co-founder of DangoorMendel, Lusen Mendel has worked with managers and engineers to level the corporate playing field. In this talk from DevRelCon San Francisco 2019, Lusen talks through mental models and practices to build and retain more diverse teams.
I am new to dev rel, actually, I have a new title this week, head of developer community and… Thank you. Thank you. And you guys are one of the best, like, kindest, most welcoming community to join. So thank you. I’m learning a ton here.
So, yeah, I’m head of developer community at Karat. Up until this week, I was director of interviewing. Karat is a company that focuses on interviews. We just do interviews, technical interviews, not sourcing, not recruiting. We do interviews for other companies. And so, for the past year, I’ve been, along with a fabulous team, in charge of hiring, and onboarding, and managing hundreds of interviewers. And so, that forms the basis for this talk.
So I want to talk about how we can hire and retain more people who are underrepresented in tech. And before I get into that, I’m going to have to talk about standards. So this is a quote you may have heard before, “I care about diversity, but we’re not a non-profit. We can’t lower the bar.”
And it’s a sentiment that I’ve thought a lot about. And my conclusion is, it’s not very helpful, right? Because it’s framing people who are underrepresented as being under-qualified stereotypically as a starting place, right? It’s creating this bias in the speaker’s mind and anyone who hears this that, “Oh, if you’re outside the norm, you’re going to have to prove yourself.”
And my experience, anecdotally, is that just the opposite is true. People who receive a lot of discouragement actually get really qualified, because they've had to prove themselves, and they also tend to have strong professional and social skills, because they're used to dealing with difficult conversations and situations. But the important thing here is just that this quote is not helpful.
This mental model is not helpful, right? It’s not inspirational, it’s not motivational. It doesn’t actually help us make better hiring decisions. You know, it’s just not helping us connect and grow with the people that we are working with.
So I want to give you all a better mental model. This is a street, right? So, like, you know, here’s someone walking down the street, steps down, crosses the road, steps up the curb on the other side, continues on his way, right? Simple landscape works for Mr. Walking.
It doesn’t work so great for some other people, like, here’s this baby carriage. You know, it’s going to have a hard time with that curb. You know, actually, it is possible for this landscape to work. Right? Like, that’s great. You know, you just take this classic very efficient circular wheel, and you squeeze it and contort it into this other shape, and, like, it does degrade the performance a little bit. It’s, like, you know, not a pleasant ride anymore, but, you know, it works, right?
So what I’m trying to say is this landscape does work for everyone if you try hard enough, right? No, I’m not gonna say that.
That’s ridiculous. Like, why don’t we just use ramps? It’s a wonderful solution, you know? And the cool thing about ramps is Mr. Walking here, like, he can still cross the street, right? It doesn’t make it any harder for this situation, right? But it does make it easier, safer, more comfortable for a lot of other people to use the sidewalk across the street.
So back to mental models. What I’m talking about here is, you know, we have this minimal viable pathway that we started with. It works with the narrowest, you know, smallest number of user groups, like one and a half users.
And what we want to do as leaders and as managers is, you know, we’re striving for a high-quality pathway, right? Something that’s accessible for the most, you know, the most number of people. We want to have the biggest pot of people that are coming in and that we’re hiring, and that are productive in our organizations or in our communities.
So, you know, whoever’s having the hardest time, who’s ever facing the biggest challenge, like, that’s what we want to focus on, right? Because if we can make things better for them, we’re going to make it better for a lot of other people too. So, you know, let’s not be scared of those challenges.
Okay, for the rest of this talk, I’m going to look at five different milestones in a career pathway and look at, well, what’s one thing we can do to raise that bar and bring more people forward with us? Cool.
So I’m going to start with recruiting. So this rectangle, this is representing all the people who might, want to work in some particular role that we’re hiring for. And, you know, there’s a line somewhere where, you know, some people are qualified for this position, some people aren’t yet qualified. That’s fine. That’s just how the world works.
Okay, here’s where things get interesting. You know, here’s the circle of candidates who are actually applying. And, you know, what might jump out to a lot of us at first is, “Oh look, there’s a ton of candidates that we’re having to spend resources on who aren’t qualified.”
You know, and that draws our attention to our job descriptions. What do they look like? Do we need to make something bold, maybe increase the font size of our requirements, really make it clear?
Like, you know, how do we cut down on having to spend our time with those folks? And this is an important challenge. Actually, one of the reasons Karat exists is to really help us, you know, to help companies, you know, with their pipelines.
But there’s another problem here that we want to look at, which is that, and it’s a more invisible problem, right? It’s all the candidates that we don’t see, that we don’t even get to make a decision about, right? Because they’re not even applying.
So one of the ways we can raise the bar when we’re recruiting is to do more outreach, less screening out, right? I’m going to get to interviews later. Like, that’s a different part of the process where we get a chance to make decisions, you know, and decide, you know, who moves forward.
But when we’re recruiting, when we have job descriptions, for example, like the people who are reading those job descriptions, they’re making the decision, not us, right? They have their own agendas, optimisms, their own cultural context about, like, what requirements really mean in a job description, you know?
So that is a challenge we have to face, and we want to be really careful to make our job descriptions clear and accurate. But invitations, everything about recruiting, it's like we want everyone to come to this party. How can we reach everyone, including specific people who maybe haven't always felt welcome? How can we really speak to them and get them to come, so that we then get to make the decision later about what those next steps might be?
Cool. So now we’ve got lots of great people applying to work at our company or join our community. How do we interview them? What can we do to raise the bar?
So one thing we might be concerned about is equity. This is the idea that every candidate is as equally likely as every other candidate to perform their best and demonstrate their skills, right?
So here’s an example. We want to hire a DevOps engineer, right? Like, someone who’s very comfortable using Chef, has written some cookbooks before, we get some candidates. And this is great. You know, the two people in the top row, they are experienced DevOps engineers. They would actually be qualified for this position. That’s nice.
On the bottom row, whoops, we had some actual chefs. Job description wasn’t that clear. And so, an equitable interview process or, you know, a good interview would successfully figure out, like, who had the DevOps skills and who didn’t, right?
And being able to determine what, you know, what are the things we want to look at that’s going to lead to this, you know, accurate assessment of who’s going to be successful. That’s the signal. That’s what we want our interviews to focus on.
The opposite of signal is noise. Say it inadvertently turned out with our interviews that, on the left side, these two gentlemen have highly connected professional networks. They've interviewed a ton, and accidentally, whoops, we've made a positive assessment to move forward with the chef, the actual chef, not a DevOps engineer.
And, you know, he's going to have a hard time in this position, right? So that was a false positive. And that's very visible, and a lot of companies try to protect against it. You don't want to be making bad decisions; they can be expensive. But you don't want to overprotect against this, right?
Because the other problem is that there was another DevOps engineer who we declined. We made the negative assessment, that was wrong, a false negative, and now we have to spend more time, and money, and resources continuing our hiring process.
And, you know, we're not building our teams as quickly as we could be. So that's also expensive, but it's less visible. So what can we do in the interview process to make sure we're not creating those missed opportunities? We want to avoid those as well.
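One way to make both failure modes concrete is a confusion matrix: hires and declines tallied against whether the candidate was actually qualified. This is a minimal sketch with invented outcomes, not Karat's actual analysis; the point is that false negatives are just as real a cost as false positives, only less visible.

```python
# Sketch: tallying interview decisions against ground truth
# (e.g., whether the candidate really had the skills).
# All names and data here are hypothetical.

def confusion_counts(results):
    """results: list of (hired, was_qualified) boolean pairs."""
    counts = {"true_pos": 0, "false_pos": 0, "true_neg": 0, "false_neg": 0}
    for hired, qualified in results:
        if hired and qualified:
            counts["true_pos"] += 1    # good hire
        elif hired and not qualified:
            counts["false_pos"] += 1   # visible, costly bad hire
        elif not hired and qualified:
            counts["false_neg"] += 1   # invisible missed opportunity
        else:
            counts["true_neg"] += 1    # correct decline
    return counts

outcomes = [(True, True), (True, False), (False, True), (False, False)]
print(confusion_counts(outcomes))
# {'true_pos': 1, 'false_pos': 1, 'true_neg': 1, 'false_neg': 1}
```

Overprotecting against false positives (say, by raising every threshold) tends to inflate the false-negative count, which is the quieter of the two expenses.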
So it’s really important to look at the signal, enhance the signal, reduce noise. And one of the ways we can do that, especially to avoid false negatives, is to look at candidate experience.
So candidate experience is really important in interviews, right? First, if someone has a nice experience, even if they don't succeed in the interview, they're going to tell their friends. Second, they're going to say nice things about you on Twitter and Glassdoor. That's really great. And third, they're actually going to perform better during the interview itself.
When we get nervous, when we get anxious, when we have weird social interactions that stress us out, maybe even when we're up on stage like this, it impacts how we think, and it impacts our bodies. I'm, like, really thirsty right now. Right? Your mind actually slows down when you get stressed. You stop taking in input and processing things as quickly; you forget things.
So helping people have a good experience actually helps them perform their best. It’s not a cheat code, they still have to, you know, demonstrate their skills and pass the interview. But, you know, you’re able to, you know, get a good assessment that’s equitable.
It’s also important for interviewers too. You know, when an interviewer is able to connect with a wide variety of people that will actually cut down on bias and help them do a better job.
So one of the things you can do is look at intentional communication. So everything we say and do during an interview is going to fall into one of these three kinds of communications, right?
We're either clarifying what the candidate should do, or providing encouragement, you know, being an active listener, engaging with them, being supportive. Or maybe we're giving hints.
This is, like, a tricky thing, right? Because if you tell the answer to someone, you’re reducing the signal because they’re not showing what they can do, you know? But sometimes someone gets stuck, and we feel like we’ve exhausted the signal in this area. We have 20 minutes left in the interview. Let’s move them over here to see if we can keep going, explore, you know, something else.
So, you know, for an interview program, one of the things you want to do is to make sure that all your interviewers know exactly when and when not to do each of these three things, and how that impacts or doesn’t impact the assessment. So everyone’s on the same page. Cool.
We’re doing these great interviews, got all these candidates coming through, but we still have to make hiring decisions.
So one thing we might be concerned about is consistency. The outcome of each interview depends on the candidate's abilities and not on external variables, right? Like, how a candidate does in an interview shouldn't depend on who the interviewer is. It should depend only on who the candidate is.
You know, here’s a candidate, wants to get hired, here’s all the people who could interview them. You know, one of the interviewers, one of the engineers actually is the person who referred that candidate, you know, another engineer. They were up all night, on call, there was an emergency, they’re having an awful day today, super upset.
Someone else, they just got back from vacation, they’re, like, not sure what’s going on. Is this a junior or senior role? Like, what questions should I ask?
But ideally, the program sets all these interviewers up for success, and they all come to the same conclusion. That would be a consistent interview process and consistent hiring decisions. And if they don't, that would be bad. That would be inconsistency, which would be kind of weird, right? Like, this candidate isn't changing.
So most of us can’t use our time machines to go back in time and re-interview a candidate with a different interviewer just to check what the situation is. Are we consistent or not? How do we improve?
So what we have is we have these interviews, we’re trying to make a binary decision, yes or no, do we move forward? And the thing that sits between those two is the write-up. And so, the way we can level up our hiring decisions is to really get explicit about our write-ups and make structured write-ups.
So we might start with, okay, you just come to this debrief with whatever's in your head, or some random paragraph. No. We usually try to specify: pay attention to these dimensions, assign stars to these things. But then we get a bit of an apples-and-oranges issue, like my three stars is your four stars. Okay, so let's get even more explicit. Let's have checkboxes and radio buttons, and write down what we're actually observing.
Now we have this really nice rubric for every candidate that's coming through an interview, but we still have to come to this yes/no decision. And so, we want to provide guidelines. Any interviewer or engineer, or dev rel, whatever the position is, should be able to look at this rubric and come to the same conclusion about whether or not this person should move forward in the process.
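As a sketch of what guidelines on top of a structured write-up could look like in practice, here's a minimal example. The dimensions, scores, and threshold here are invented for illustration, not Karat's actual rubric; the idea is simply that the write-up is structured data, and the decision rule is explicit enough that anyone applies it the same way.

```python
# Hypothetical structured write-up: explicit observations instead of
# free-form paragraphs, plus a shared guideline for the yes/no call.

RUBRIC_DIMENSIONS = ["problem_solving", "code_quality", "communication"]

def recommend(writeup, required=2, threshold=3):
    """writeup maps each dimension to a 1-5 observation score.
    Guideline (invented for illustration): move forward if at least
    `required` dimensions score at or above `threshold`."""
    strong = sum(1 for d in RUBRIC_DIMENSIONS if writeup[d] >= threshold)
    return "move forward" if strong >= required else "decline"

writeup = {"problem_solving": 4, "code_quality": 3, "communication": 2}
print(recommend(writeup))  # move forward
```

Because the rule is written down rather than living in each interviewer's head, two interviewers looking at the same observations can't reach different conclusions, which is exactly the consistency property described above.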
At Karat, we’ve done over 30,000 interviews. So we actually use an algorithm to take the observations that are coming out of our interviews to figure out, you know, what the recommendation would be for a specific role.
And that gives us really nice data. Like, all this structure, once we have reliable, consistent interviews, we now have structured data, we have signal low noise, and we can then analyze, you can look at our questions and, you know, calibrate them against each other. We can, you know, look at adverse impact, how are men coming through our interviews versus women.
And so, we can use all this information to then inform and improve the interviews themselves. We can look at, you know, the quality of different sources, all sorts of things.
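One common way to check for adverse impact, though not necessarily the analysis Karat runs, is the four-fifths rule from the US EEOC's Uniform Guidelines: if a group's selection rate falls below 80% of the highest group's rate, that's a red flag worth investigating. A minimal sketch with made-up pass rates:

```python
# Sketch of a four-fifths-rule adverse impact check.
# The pass rates below are invented for illustration.

def adverse_impact(pass_rates, threshold=0.8):
    """pass_rates: group -> fraction of candidates who passed.
    Flags any group whose rate falls below `threshold` times the
    highest group's rate (the 'four-fifths rule')."""
    top = max(pass_rates.values())
    return {group: rate / top < threshold
            for group, rate in pass_rates.items()}

rates = {"men": 0.50, "women": 0.35}
print(adverse_impact(rates))  # {'men': False, 'women': True}
```

This kind of check only becomes possible once interviews emit structured, consistent data; with free-form write-ups there are no rates to compare.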
Okay, so we’ve just hired all these great people, we’re feeling really pumped, and now we want to set them up for success.
And this is a situation you may have found yourselves in before. You have a really nice team or a really nice community. It’s kind of small. Everyone’s, you know, communicating well, sharing knowledge, equal contributors. You do a bunch of hiring, you’re growing, everything’s going well.
But then you get to this point where you realize there’s a few people who are really loud, kind of making all the decisions and essentially informing other people of what they should do. And other people who may have been quite senior in your community previously, very strong contributors may have stepped back. They’re kind of, you know, not contributing as much anymore.
And, you know, to some extent, hey, I’m a manager. I like making decisions and telling people what to do. So, you know, that’s not the problem or it might not be the problem. But the problem here is that it’s unintentional, right?
There was this vacuum that some people perceived, and people who were loud, like, stepped in and stepped forward and other people stepped back. And that wasn’t, you know, it wasn’t necessarily the case that they were good decision-makers or good facilitators of the community, it just, you know, felt like they needed to fill that space. So to really raise the bar to enable everyone, fill leadership vacuums with intentional leadership.
And two things you can really do to impact that are define roles explicitly, this is what it means to be a mentor. Like, “this is how you become a mentor, this is what it means to be successful.” Or, you know, “this is how you become a reviewer, and this is what it means to be successful.”
And, you know, as management or leadership, you’re trying to provide some oversight to make sure people are on track and doing good things, getting rewarded for, you know, if they’re doing good things, you know, getting promoted.
And define how decision-making works. My favorite is Sam Kaner’s, Facilitator’s Guide to Participatory Decision-Making. It’s a really nice look at how to make sustainable agreements. There’s lots of different decision-making rules, consensus, majority rule, person with authority makes decision with input, and it looks at how and when to use each of those things.
But, yeah, so you want to recognize when people are doing great things and then, you know, promote them to fill those vacuums.
You know, the good thing here is that we’ve just gone through a bunch of best practices for, you know, helping to assess people, and hire them into a role. It’s very similar when you want to assess someone and move them forward in a career path or even to, like, a role that might just be, like, a badge.
So just to summarize, invite proactively, right? It shouldn't just be the squeaky wheels who are getting raises, or promotions, or who are getting turned into, like, reviewers in a community.
Look at signal, you wanna enhance the signal, reduce noise. And use explicit assessments. You know, in code reviews, sometimes those are just random paragraphs, but there might actually be a little bit of a more structured form that someone could be filling out, so that you’re collecting data about how someone is doing and, of course, fill vacuums intentionally.
So I want to look at that last one, explicit assessments. I'm nearly done with this talk, but just to play devil's advocate a little bit. You know, at Karat, all of our interviews are recorded. So every interview has a second interviewer: a more senior, intentionally-appointed interview reviewer watches every interview, or reviews it in some way, to give feedback.
And that review is structured itself, so we’re getting data about how aligned each of our interviewers are, and, you know, how are they doing on candidate experience or, you know, following certain guidelines.
And an interviewer told me recently, when I was asking for feedback, "You know, some people might think this is a little creepy. But I trust leadership because I know they're looking out for me, instead of me having to advocate for myself," which she was very uncomfortable doing.
She's outside of the norm; she's a person of color. That's not a reason not to want to self-advocate, but she wasn't comfortable doing that, and she felt like, "My trust has increased because I was proactively promoted. People were looking out for me."
And so, I hope a takeaway from this talk is that management and leadership are really a relationship between people. How can we create healthy relationships where we're proactively using our power, with responsibility, to bring people forward, really look out for folks, and help them stay on track? And then celebrate and reward them as they do good things.
So that is my talk. Thank you so much.