How to write stuff that gets on the front page of Hacker News

EDIT: This piece did indeed make it to the front page of Hacker News in a meta victory. Thank God.

Hi. My name is Aline, leeny on Hacker News. My team at interviewing.io and I have written a lot of stuff, and most of it has been on the Hacker News front page — of the 30 blog posts I (and later we) have submitted, 27 have been on the front page, and over the last few years, our writing has been read by millions of people.

We wrote most of these… though some are just great content I underhandedly beat the author to submitting and feasted on the ill-gotten karma.

Though the first few things I ever wrote were driven by a feckless mix of intuition, terror, and rage (I write a lot about how engineering hiring is unfair and inefficient and broken), over time I began to notice some common threads among my most successful posts, and these realizations have made it easier for me to weep less, write more, pass on the learnings to my team, and create a somewhat repeatable system of content generation.

I’m not altogether unaware that the title of this post has a whiff of hubris about it and merits some amount of disclaiming. I don’t claim that my way of writing is the only way, nor do I claim that it’s going to work forever. Every time I write something and submit it, I ask myself, “Is this it? Is this the one where I find out the formula no longer works?” It’s terrifying and it’s fickle, and I’m beyond grateful to the HN community for reading interviewing.io’s stuff as long as it has.

What makes content sticky?

This list isn’t exhaustive, and surely there are other strategies for crafting sticky content, but I can only talk about the two that have worked well for us. The most effective strategy, in my experience, is to tap into a controversial point that your audience already holds and then back it up with data you have that confirms their suspicions.

The second strategy is to share something uniquely helpful with your audience that makes them better at some meaningful aspect of their lives.

I use both of these techniques repeatedly, but in my experience, the “controversial with data” technique is way more effective than being “helpful”. More on that later, but first, here’s how to execute on both.

Confirmation bias, cocktail parties, and data

What is confirmation bias? It’s why people enjoy saying “I told you so!” It’s the tendency to interpret new information in a way that reinforces existing beliefs… preferably controversial beliefs that your audience suspects are true (and are probably frustrated about) but can’t definitively back up.

[Comic: confirmation bias]

In our case, it was a bunch of aspects of status-quo hiring, stuff like: resumes suck, LinkedIn endorsements are dumb, technical interviews are being done badly and the results aren’t deterministic, and so on and so forth.

So, you take that kernel of frustration, and then you put some data firepower behind it. Find the data that you have that no one else has, and use it to prove that those controversial beliefs do indeed hold water… lighting up the same parts of our brains that make us fall prey to confirmation bias, in other words.1

Another way to say it is that the best content marketing, in my mind, is the stuff that makes people smugly want to repeat it at cocktail parties. I don’t say that with judgment or derision. I derive much of the pleasure in my short, brutish life from being smug and right. It’s not something I’m necessarily proud of, but it’s true.

So, if you have something controversial to say, why does having data matter? Because no one cares what Aline “Dipshit” Lerner thinks about hiring. You and your readers might hold all sorts of controversial opinions about the world, but until you’re really famous, your opinion doesn’t matter more than anyone else’s. But data (especially if it’s proprietary) can elevate an anecdote to gospel. Data provides you with the credibility that nothing else can at this stage — no matter who you are, if you have compelling data, engineers will listen.

The one thing you really have in your favor in these situations is that, because no one knows who you are, the more sophisticated your audience, the more likely they are to take your good content seriously. You don’t have a brand to protect, you don’t have a comms team; all you have is the unvarnished truth from the trenches.

With the attributes above in mind, think about what cool stuff you’ve discovered by virtue of working at your company. Do you have a data set you can mine for unique insights? Does having operated in your space at depth put you in a position where you can confirm or deny controversial assumptions about some aspect of human nature or our daily lives? If you’re a founder, what unique insight do you have that made you start this company in the first place? If you’re an employee, what part of the mission/vision/execution really resonated with you, to the exclusion of other options you had in the same space? Then, once you’ve identified the right sticky tidbit, it’s up to you to distill it into plain English and then back it up with data… which in practice means some very clear (and maybe pretty… though clear trumps pretty) graphs or visualizations.

It’s tempting to fall into the trap of creating content that tells rather than shows, and the myriad blog posts out there to the tune of “here’s how we run meetings” or “here’s our product process” are proof of that. Typically, posts like this don’t do very well because frankly, no one cares about how you run your processes until you’re a household name that others are trying to emulate. One exception to this rule is if you want to highlight something polarizing you’re doing. In that case, feel free to shout that directly to the world so it’s loud and clear and makes its way most directly to the fringe community you’re targeting. In other words, if you’re really gung ho about TDD, you can write a blog post called “Why we ALWAYS use TDD with no exceptions”, and it’ll do great because of confirmation bias among TDD evangelists, probably the very people you want to target.2

Being helpful

Though, in my experience, the controversial cocktail party technique is the most effective, you can’t always bust out controversy at the drop of a hat, and you might have plenty of useful, interesting things to say that don’t tickle our desire to be right. If you can’t be controversial, then be helpful. Note that “helpful” means giving your readers specific, actionable advice about things that have a big impact on their lives (love, work, sex, health) rather than general worldviews on these topics.

Also, note that being helpful is not nearly as effective as being controversial. Woe is us.

Controversy is more effective than being helpful… here’s the data

And then there’s this post, which has had one maybe-controversial idea so far (namely that making your readers feel smug is what gets you eyeballs and clicks) but no data to speak of. To right that gross injustice, I looked at all the posts my team and I have contributed to Hacker News over the years and tagged each with 3 attributes: whether it was controversial, whether it was helpful, and whether it had data.

Below is a graph showing the average number of HN upvotes per post type. I looked at whether a given post was helpful or controversial, and for each type, I broke posts into 2 subcategories: those with data/graphs and those without.

[Plot: average HN upvotes per post type (controversial vs. helpful, with and without data)]

I refrained from doing any significance testing because teasing apart independence here would have been an unprincipled nightmare. For instance, most of our helpful posts didn’t have data, so whether a post was helpful and whether it had data weren’t independent of each other. That said, there’s probably still something useful to be learned from just looking at the mean upvotes for each category, namely that if you don’t have data, then write helpful stuff. It’ll do OK. If you do have data, controversy reigns supreme.
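
If you want to run the same tagging exercise on your own posts, the breakdown is just a group-by over mean upvotes. Here’s a minimal sketch in Python; the post titles, tags, and upvote counts are made up for illustration.

```python
import pandas as pd

# Hypothetical posts -- titles, tags, and upvote counts below are invented;
# only the structure mirrors the exercise described above.
posts = pd.DataFrame(
    [
        ("resumes are noisy",             "controversial", True,  612),
        ("how to run better 1:1s",        "helpful",       False, 148),
        ("interview outcomes are random", "controversial", True,  540),
        ("salary negotiation tips",       "helpful",       False, 201),
        ("our product process",           "helpful",       True,   95),
    ],
    columns=["title", "type", "has_data", "upvotes"],
)

# Mean upvotes for each (type, has_data) bucket -- the same breakdown as the plot.
print(posts.groupby(["type", "has_data"])["upvotes"].mean())
```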

Examples of good content marketing

It’s easy to wax general. I don’t think this post is going to be helpful without some examples. Here are some examples of stellar writing that fall into the categories above.

Examples of controversial, data-driven content marketing

For me, the canonical, original, great data-driven posts all live in the OKCupid blog, which served as the lodestar of what good blogging could be. These days, the original posts are buried in a cave where no site nav breadcrumbs will go (they’ve been replaced by a sad facsimile of what they used to be, utterly inoffensive, bland, and humorless), and I had to google to find them. But, you know, gems like this:

  • OKCupid – The lies people tell in online dating where the controversial idea is that people really do lie a lot in online dating (this was controversial in 2010 back when it was socially appropriate to be embarrassed that you were dating online)
  • OKCupid – The case for an older woman where the controversial idea is that women over 30 are viable (very sad for me that this is controversial)
  • Uber early blog – Rides of glory where the controversial idea is that you can guess which of your users are having sex based on their ride usage data (this post was what introduced me to Uber and I expect helped meaningfully build their brand… there’s a reason it’s no longer up and I had to link to the web archive)
  • Priceonomics – The San Francisco drug economy where the controversial idea is that it’s very lucrative (and not very hard) to be a drug dealer in San Francisco, and many users are in tech

But… do posts HAVE to hit a nerve and make some portion of the population uncomfortable? Though those tend to be the most fun, this isn’t necessary to produce great content. Hiring is typically a much more tame subject than sex, but it’s possible to write controversial things about it — I’d be remiss if I didn’t link you to some of the things we’ve written. Here are a few favorites that exemplify our take on controversial, data-driven blogging:

Examples of helpful content marketing

As we discussed earlier, not all good content marketing falls into this controversial-anecdote-backed-up-with-data format. Some successful posts just have really useful content that makes you better at some meaningful part of your life.

And of course, a few of ours:

So, we have some theory about content marketing, and we have some practical examples. What now? To that end, here’s one last controversial piece of advice for you: drinking a little might make you a more prolific writer.

How to actually make yourself write

The “cocktail party anecdote backed by data” premise is reliable and repeatable and it works, and I expect as you read this, you probably have some ideas about topics you could write about. Ideas are the easy part, however. How do you actually summon the wherewithal to write?

Before he was a hipster text editor, Ernest Hemingway was a churlish, surly alcoholic writer with an allergy to adverbs who coined the phrase “Write drunk, edit sober” and changed my life and liver forever.

When I was maybe halfway through writing Lessons from a year’s worth of hiring data (the first successful post I ever had), I hit what felt like an insurmountable wall. I had already spent months manually counting typos in resumes, had run a logistic regression and a bunch of statistical tests, and was pretty sure that I was onto something — the number of typos and grammatical errors in one’s resume appeared to matter way more than where someone worked or went to school. And there were other surprising findings, too. But when I tried to get the words out, they wouldn’t come. The typos thing was super cool, right? And surely a better, more competent human would do that finding justice when writing about it. In my hands, the work read like an insipid, stilted school assignment. I drew the blinds and sort of curled up in a ball on the ground for I don’t know how long… eventually my ex-husband and his friend who was visiting came home, a few beers in, and peeled me off the floor.

I don’t remember what the two of them said to me exactly, but my brain put it away in memory as something along the lines of, “Stop being dramatic and get out of your head and drink with us, for life is short and brutish.”

So I drank. And maybe then we had a dance party or something… I don’t know. But at some magical, serendipitous moment, Florence + The Machine came on. And I sat back down at my computer and started working myself into a frenzy to the tune of the music… “Hiring isn’t fair, the world isn’t fair… hiring isn’t fair, and the world isn’t fair, and fuck the fact that everyone uses resumes and rejects all manner of good people even though they’re clearly a crock because typos matter 50 kajillion times more than pedigree.”

And in that slightly drunken, fevered frenzy I wrote the rest of the post. It ended up getting cut in half or more by friends who were kind enough to extract a few cogent bits from whatever it was that I produced. The writing in that post isn’t the best, but it’s ok… and it was good enough to get the payload about typos (and generally about how dumb resumes are) across clearly, which is ultimately what mattered most.

Why does wine help me write (please see the footnote before you unleash your wrath)?3 Because, for a brief hour or so, it stills the inexorable pull of self-editing and silences the voices that tell you you’re a piece of shit who can’t write worth a damn. Now, you might still be a piece of shit who can’t write worth a damn, but you’ll never become a piece of shit who can write unless you actually write.

Once the voices are quiet, you can get out whatever is in your head. It doesn’t have to make sense, it doesn’t have to be ordered or flow, and it doesn’t have to be the most important takeaway you anticipate your post will ultimately have. Whatever it is will be raw and real… and then you (and your friends or coworkers if you’re lucky) will prune the drivel and mold it into something good.

So drink your wine (or don’t drink… just do whatever gets you in a good place) and put on whatever music fills your heart with rage (or inspiration if you’re not a broken human like me), and get to it. And do it again and again, until the ritual itself is what gives you comfort and lets you produce.

But, friends, be warned: please do your data analysis sober.

 

1 The folks at Priceonomics succinctly assigned “confirmation bias” as the right term for this technique. I first heard it at a workshop they ran. I had been doing this for years but hadn’t ever heard the term before. They do great work around content marketing and have made a business out of harnessing confirmation bias and data. They’ve also written a much lengthier guide to what makes good content than this post.

2 I have no idea why I picked TDD for this example. I do not have strong feelings about TDD, and there are probably way more controversial things out there… like using JavaScript in server-side production.

3 I’m probably going to catch a lot of flack and vitriol for encouraging drinking. Look, it works for me. It doesn’t have to work for you, and it might be really bad for you in particular because of some unserendipitous mix of genetics and past decisions. So, instead of drinking, let’s use alcohol as metonymy for any number of activities that quiet the voices in the head and let you focus. I hear that among the well-adjusted, meditation is all the rage, as is physical exercise. For those on the fringe, we drink in the dark.

Diversity quotas suck. Here’s why.

A few days ago, I contributed to a roundtable discussion-style post about diversity quotas (that is, setting specific hiring targets around race and gender) on the Key Values blog. Writing my bit there was a good forcing function for exploring the issue of diversity quotas at a bit more length… and if I’m honest, this is a topic I’ve had really strong opinions about for a while but haven’t had the chance to distill. So, here goes.

I think it’s important to ask ourselves what we want to accomplish with diversity quotas in the first place. Are we trying to level the playing field for marginalized groups? To bring in the requisite diversity of thought that correlates so strongly with a better bottom line? Or to improve our optics so that when the press writes about our company’s diversity numbers, we look good? Unless diversity quotas are truly an exercise in optics, I firmly believe that, in the best case, they’re a band-aid that fails to solve deep, underlying problems with hiring and that, in the worst case, they do more harm than good by keeping us complacent about finding better solutions, and paradoxically, by undermining the very movement they’re meant to help. Instead of trying to manage outcomes by focusing on quotas, we should target root causes and create the kind of hiring process that will, by virtue of being fair and inclusive, bring about the diversity outcomes we want.

Why are quotas bad? If it’s not just about optics, and we are indeed trying to level the playing field for marginalized groups, let’s pretend for a moment that quotas work perfectly and bring us all the desired results. Even in that perfect world, we have to ask ourselves if we did the right thing. Any discussion about leveling the playing field for marginalized groups should not just be about race but should also include socioeconomic status. And age. And a myriad of other marginalized groups in tech.

We often focus on race and gender because those are relatively easy to spot. Socioeconomic status is harder because you can’t tell how someone grew up, and you can’t really ask “Hey, were your parents poor?” on an application form. Age is a bit easier to spot (especially if you spent your 20s lying around in the sun like I did), but it’s illegal to ask about age in job interviews… to prevent discrimination! Surely, that’s a contradiction in terms. So, if we’re leaving out socioeconomic status and age and a whole bunch of other traits when we assign quotas, are we really leveling the playing field? Or are we creating more problems?

One of the downsides of diversity quotas is the tokenization of candidates, which often manifests as stereotype threat, one of the very things we’re trying to prevent. I can’t tell you how many times people have asked me if I thought I got into MIT because I’m a girl. That feels like shit… in large part because I DON’T KNOW if I got into MIT because I’m a girl. Stereotype threat is a real thing that very clearly makes people underperform at their jobs… and then creates a vicious cycle where the groups we’re trying to help end up being tokenized and scrutinized for underperformance caused by the very thing that’s supposed to be helping them.

So, what about diversity of thought? If you’re really going after candidates who can bring fresh perspectives to the table, their lived experience should trump their gender and ethnicity (though of course, those can correlate heavily). If you’re really after diversity of thought, then educational background/pedigree and previous work experience should weigh just as heavily. Before I became a software engineer, I spent 3 years cooking professionally. Seeing how hiring happened in a completely different field (spoiler: it’s a lot fairer) shaped my views on how hiring should be done within tech. And look, if you put a gun to my head and asked me, given absolutely identical abilities to do the job, whether I should hire a woman who came from an affluent background, aced her SATs because of access to a stellar prep program and supportive parents, went to a top school and interned at a top tech company over a man who dropped out of high school, worked a bunch of odd jobs, taught himself to code, and had the grit to end up with the requisite skills… I’ll take the man.1

But I’ll also feel shitty about it because I don’t think I should have to make choices like this in the first place. And the fact that I have to is what’s broken. In other words, quotas don’t work from either a moral perspective or from a practical one. At best, they’re a band-aid solution covering up the fact that your hiring process sucks, and the real culprit is the unspoken axiom that the way we’re doing hiring is basically fine. I’ve already written at length about how engineering hiring and interviewing need to change to support diversity initiatives, so I won’t rehash it here, but the gist is that fixing hiring is way harder than instituting quotas. Low-hanging fruit aren’t going to get us to a place of equal opportunity; better screening and investments in education will. At interviewing.io, because we rely entirely on performance in anonymous technical interviews rather than resumes to surface top-performing candidates, 40% of the hires we’ve made for our customers are people from non-traditional backgrounds and underrepresented groups (and sometimes these are candidates that the same companies had previously rejected based on their resumes). The companies we’ve hired for that have benefited from access to these candidates were willing to undergo the systemic process change and long-term thinking that effecting this level of change requires. We know our approach works. It’s hard, and it takes time and effort, but it works.


1There was a recent New York Times piece about how “diversity of thought” is an excuse that lets us be lazy about working to hire people from underrepresented groups. I believe that the kind of “root cause” approach we’re advocating where we invest in long-term education and create a fairer hiring process is significantly harder than doing something like quotas.

In defense of Palantir… or why the Department of Labor got the wrong man

On September 26th, the U.S. Department of Labor filed a suit against Palantir Technologies, alleging that Palantir’s engineering hiring practices discriminate against Asian applicants. I don’t have any salacious insider information about this suit, but I do have quite a bit of insight into how technical hiring works. Palantir and the DOL are really arguing over using resumes versus employee referrals to screen job candidates, when smart companies of a certain size should primarily rely on neither. In other words, rather than Palantir, standard hiring practices are really what should be on trial.

The DOL’s suit is based on the unfavorable disparity between the number of “qualified” Asian applicants for 3 engineering roles between 2010 and 2011 and the number of resulting Asian hires. Below (as taken from the full complaint), you can see the roles covered by the suit, the number of applicants and hires, and the odds, according to the DOL’s calculations, that these disparities happened by chance:

[Table from the DOL complaint: the roles covered by the suit, applicant and hire counts, and the odds of each disparity occurring by chance]
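
The complaint doesn’t publish its math, but if you’re curious how one might put a number on “the odds this happened by chance,” here’s a minimal sketch using a hypergeometric model. The applicant and hire counts are made up; this is not the DOL’s actual data or methodology.

```python
from scipy.stats import hypergeom

# All numbers below are hypothetical, for illustration only.
total_applicants = 1160   # qualified applicant pool for a role
asian_applicants = 850    # Asian applicants in that pool
total_hires = 25          # hires made from the pool
asian_hires = 11          # Asian hires

# Probability of seeing this few (or fewer) Asian hires if hiring were a
# uniformly random draw from the qualified pool.
p = hypergeom.cdf(asian_hires, total_applicants, asian_applicants, total_hires)
print(f"P(a disparity at least this large, by chance) ~= {p:.6f}")
```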

My issue with this setup is simple: what does qualified actually mean? According to the complaint, “Palantir used a four-phase hiring process in which Asian applicants were routinely eliminated during the resume screen and telephone interview phases despite being as qualified as white applicants.” A four-phase hiring process is typical in tech companies and refers to a resume screen, followed by a call with a recruiter, then a technical phone screen where the applicant writes code while another engineer observes, and finally a multi-hour onsite interview.

To determine basic “qualification,” the DOL relied, at least in part (and likely heavily), on the content of applicants’ resumes which, in turn, boils down to a mix of degree and work experience. Resumes are terrible predictors of engineering ability. I’ve looked at tens of thousands of resumes, and in software engineering roles, there is often very little correspondence between how someone looks on paper and whether they can actually do the job.

How did I arrive at this conclusion? I used to run technical hiring at a startup and was having a hell of a time trying to figure out which candidates to let through the resume screen. Over and over, people who looked good on paper (had worked at companies like Google, had gone to schools like MIT, and so on) crashed and burned during technical interviews, whereas candidates without pedigree often knocked it out of the park. So, I decided to examine the resumes of everyone who applied over the course of a year as well as those of past and current employees. After looking at hundreds of resumes and everything from years of experience and highest degree earned to GPA and prestige of previous employers, it turned out that the thing that mattered most, by a huge margin, wasn’t any of those attributes. Rather, it was the number of grammatical errors and typos on their resume.
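
If you have your own pile of resumes and outcomes and want to try a similar analysis, here’s a minimal sketch of a logistic regression along these lines. The features and data are invented; the original study used hand-labeled resumes and its own feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented resume features: typos, top school (0/1), top company (0/1), years of experience.
features = ["num_typos", "top_school", "top_company", "years_experience"]
X = np.array([
    [0, 1, 1, 4],
    [3, 1, 0, 6],
    [1, 0, 0, 2],
    [5, 0, 1, 8],
    [0, 1, 1, 3],
    [3, 0, 0, 5],
    [4, 1, 0, 7],
    [1, 0, 1, 1],
])
y = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # 1 = passed the technical interview

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # the coefficient on num_typos is the one to watch
```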

Don’t believe me that screening for education and experience doesn’t work? Then consider the following experiment. A few years ago, I showed a set of anonymized resumes from my collection to 150 engineers, recruiters, and hiring managers and asked them one question: “Would you interview this candidate?” Not only did participants, across the board, fail at predicting who the strong candidates were (the odds of guessing correctly were roughly 50%, i.e. the same as flipping a coin), but, much more importantly, no one could even agree on what a strong candidate looked like in the first place.

Organizations realize that resumes are noisy and are forced to explore other, more reliable channels. In the case of Palantir and many other companies, this boils down to relying on employee referrals, and that may be the DOL’s strongest argument. According to the complaint, “…the majority of Palantir’s hires into [the three positions listed in the suit] came from an employee referral system that disproportionately excluded Asians.” Using referrals as a hiring channel is an extremely common practice, with the rationale being that it’s a reliable source of high-quality candidates. This makes sense. If resumes were reliable, then referrals wouldn’t be such a valued channel.

Despite its ubiquity, is relying on referrals grounds for a discrimination suit? Perhaps. But referrals were Palantir’s perfectly reasonable attempt to find a better screen than resumes. Palantir just didn’t go far enough in looking at other options. What if, instead of being bound to hire from the quasi-incestuous sliver of your employees’ social graph, you could have reliable, high-signal data about your entire candidate pool?

Until recently, when hiring, you had to rely on proxies like resumes to make value judgments because there simply wasn’t a good way to get at more direct, meaningful, and unbiased data about your candidates.

We now have the technology to change that. A slew of products anonymize candidate names, entirely occluding race. A whole other set of tools enables you to send relevant take-home exercises to all your applicants and automatically score their submissions, using those scores as a more indicative resume substitute. And there’s my company, interviewing.io, which I’m totally plugging right now but which also happens to be the culmination of my attempts to fix everything that’s pissed me off about hiring for years. interviewing.io matches companies with candidates based entirely on how those candidates have been doing in technical interviews up until that point. Moreover, every interview on our platform is blind — by the time you unmask with a candidate, you’ve decided whether you’re going to bring them in for an onsite, and you’ve used nothing but their interview performance to make that decision.

Whichever solution ends up being the right one, one thing is clear. It’s time to shut down outdated, proxy-based hiring practices. That doesn’t mean paying lip service to diversity initiatives. It means fundamentally rethinking how we hire, paring away every factor except whether the candidate in question can do the job well.

Any other kind of hiring practice is potentially discriminatory. But even worse, it’s inefficient and wasteful. And it’s ultimately the thing that, unlike Palantir, truly deserves our wrath.

A founder’s guide to making your first recruiting hire

Recently, a number of founder friends have asked me about how to approach their first recruiting hire, and I’ve found myself repeating the same stuff over and over again. Below are some of my most salient thoughts on the subject. Note that I’ll be talking a lot about engineering hiring because that’s what I know, but I expect a lot of this applies to other fields as well, especially ones where the demand for labor outstrips supply.

Don’t get caught up by flashy employment history; hustle trumps brands

At first glance, hiring someone who’s done recruiting for highly successful tech giants seems like a no-brainer. Google and Facebook are good at hiring great engineers, right? So why not bring in someone who’s had a hand in that success?

There are a couple of reasons why hiring straight out of the Googles and Facebooks of the world isn’t necessarily the best idea. First off, if you look at a typical recruiter’s employment history, you’re going to see a lot of short stints. Very likely this means that they were working as a contractor. While there’s nothing wrong with contract recruiting, per se, large companies often hire contract recruiters in batches, convert the best performers to full-time hires, and ditch the rest.1 That said, some of the best recruiters I know started out at Google. But I am inclined to believe they are exceptions.

The second and much more important reason not to blindly hire out of tech giants is the importance of scrappiness and hustle in this hire. If you work as a recruiter at Google, you’re basically plugged into the matrix. You have a readymade suite of tools that make it much easier to be successful. You have a database of candidates who have previously interviewed that spans a huge portion of the engineering population. Email discovery is easier. Reaching out to people is easier because you have templates that have already been proven to work. And you can lean on the Google brand as a crutch. Who hasn’t been, at one point in their career, flattered by an email from a Google recruiter? As a result, if you’re sending these emails, you don’t have to go out of your way to write personal messages or to convince people that your company is cool and interesting and worth their time. You get that trust for free.

Contrast this setup with being the very first person in the recruiting org. You have no tools. You have no templates. You probably have no brand. You probably have, well, jack shit. You need someone who’s going to think critically about tooling and balance the need for tooling with a shoestring budget, especially in a space where most tooling has a price tag of at least $1K per month. You’re going to need someone whose methods are right for your particular situation rather than someone who does things because that’s just how they’ve always been done. You probably want someone who realizes that paying for a LinkedIn Recruiter seat is a huge fucking waste of money and that sourcing on LinkedIn, in general, is a black hole-level time suck. You want someone who is good at engaging with candidates independently of brand sparkle, which likely means someone who understands the value of personalization in their sourcing efforts. You want someone who compensates for your relatively unknown status with great candidate experience during your interview process. You want someone who won’t just blindly pay tens of thousands of dollars for career fair real estate because that’s just what you do, even though the only companies who get ROI on career fair attendance are ones with preexisting brands. And, apropos, you want someone who can start building a sparkly brand for you from day one because building a brand takes time. (More on brand-building in the last two sections on marketing chops and evangelism.)

Sales chops are hugely important, and you can test for those

People often ask me if having an engineering background is important for technical recruiters. My answer to that is always, “Yes, but.” Yes, it’s useful, but the main reason it’s useful is that it helps build credibility and rapport with candidates. A good salesperson can do that without all the trappings of engineering experience. To put it another way, at the end of the day, this is a sales job. Great engineers who are shitty salespeople will not do well at recruiting. Great salespeople with no engineering background will likely do well.

So, how can you test for sales aptitude? If the candidate is currently an in-house recruiter somewhere, I ask them to pitch me on the company’s product. If they’re an agency recruiter, I ask them to pitch me on one of their clients’ products. Most recruiters do a decent job of pitching the company as a good place to work, but unfortunately, many don’t have a very good understanding of what their company actually does. Given that they’re the first point of contact for candidates, it’s really important to be able to answer basic questions about product-market fit, challenges (both product and engineering), how the company makes money, how much traction there is, what the competition looks like, and so on. Moreover, a lack of interest in something this basic points to a lack of intellectual curiosity in general, and in a technical recruiter, this is a very poor sign because such a huge portion of the job is picking up new concepts and being able to talk about them intelligently to very smart people.

You want someone who can write

I was on the fence about whether to include this section because it sounds kind of obvious, but writing well is important in this role for two reasons. First off, your recruiter is likely going to be the first point of contact with candidates. And if you’re an early-ish company without much brand, correspondence with the recruiter will likely be the first time a candidate ever hears of you. So, you probably want that interaction to shine. And the other reason you want someone who cares about narrative, spelling, and grammar is that they will be the arbiter of these abilities in future recruiting hires. Enough said.

One exercise I like to have candidates for this role go through is writing mock sourcing emails to people at your company, as if they were still at their previous position. This portion of the interview process is probably the best lens into what it’s actually like to work with the candidate. In particular, because candidates are not likely to have a clear idea of what they’re pitching yet, I try to make this part of the process iterative and emphasize that I welcome any number of questions about anything, whether it’s the company’s offering, what companies my firm works with, what certain parts of the engineers’ profiles mean, or anything in between. What questions people ask, how they ask them, and how they deal with the ambiguity inherent in this assignment is part of the evaluation, as is the caliber of the research they did on each mock email recipient.

You want someone with marketing chops

I talked a bit earlier about how you probably have no brand to speak of at this point. I can’t stress enough how much easier having a brand makes hiring. Until you have one, especially in this climate, you’re going to be fighting so fucking hard for every one-off hire. If you can, you ought to put effort into branding such that you end up in the enviable position of smart people coming to you.

So why don’t early-ish companies do this across the board? Brand building is a pain in the ass, it takes time, and not all of your outbound efforts are going to be measurable, which can make it harder to get others in your org to buy in. If you can find someone who’s had even a little bit of marketing experience, they’ll be able to identify channels to get the word out, use their preexisting network to help with outsource-able tasks, and get the ball rolling on things like hosting events, which, if you’ve never done before, can be quite intimidating.

And because recruiting doesn’t live in a vacuum and needs help from other teams to send something high-signal and genuine into the world, someone with some marketing experience will likely have an easier time getting other teams to buy in and put time and resources into this endeavor, which brings me to my next point.

You want someone who can fearlessly evangelize the importance of recruiting… and get you to take an active role even when you don’t feel like it

The harsh reality is that the primary reason companies hire their first recruiter is so that hiring can be taken off the plate of the founders. It’s tempting to have the “set it and forget it” mentality in a founder’s shoes — recruiters aren’t cheap, so presumably if you pay them enough, they’ll just deal with this pesky hiring thing, and then you can get back to work. I get it. Hiring isn’t that fun, and as a founder, despite having been a recruiter myself, there are definitely days when I just want to pay someone to, for the love of god, take this bullshit off my hands so I can get back to talking to users and selling and figuring out what to build next and all sorts of other things.

But it doesn’t work that way. If you’re a founder, no one can sell your vision as well as you. And all that experience you’ve built up that makes you a subject matter expert probably also makes you pretty good at filtering candidates. You might take a lot of what’s in your head for granted, but transferring that over into someone else’s brain is going to take time and iteration. And you can never really dissociate from hiring entirely because the moment you do, the culture of “hiring is just the domain of recruiting” is going to trickle down into your culture, and over time, it will cost you the best people.

In my recruiting days, at a high level, I saw two types of hiring cultures. One had the hiring managers and teams taking an active role, participating in sourcing, tweaking interview questions to make them engaging and reflective of the work, and taking time to hang out with candidates, even if they weren’t interviewing yet. The other type had the recruiting org be largely disjoint from the teams it was hiring for. In this type of setup, team members would view recruiting as a hassle/necessary evil that took them away from their regular job, and most of the remaining trappings of the hiring process would be left in the hands of recruiters alone.

You can guess which type of company ends up with an enviable interview process, a sweet blog, cool, hiring-themed easter eggs in their code, and a wistful, pervading, nose-pressed-against-the-glass refrain of “I wish I could work there”. And you can, in turn, guess which company demands a CS degree and 10 years of [insert recent language name here] experience in their job descriptions.

Despite these realities, founders and hiring managers often forget how critical their role in hiring is because they have a ton of everyday tasks on their plates. This is why having your recruiter be a fearless evangelist is so important. This person needs to cheerfully yet insistently remind the team and especially founders (who are, after all, the ones who dictate culture) that time spent on hiring is part of their jobs. This person needs to be able to march into the CEO’s office and demand that they go and give a talk somewhere or consistently block off time on their calendar every week to send some sourcing emails. Or that they need to write some stuff somewhere on the internet such that people start to realize that their company is a thing. Marching into a CEO’s office and making demands is tough. You need a person who will do this without trepidation and who will be able to convince you, even when the sky is falling, that a few hours a week spent on hiring are a good use of your time.

In addition to these points, all the usual thought points about hiring someone who’s going to be growing a team apply here. Is this person already a strong leader? If not, can they grow into one? Are they going to be able to attract other talent to their team? Are they someone you want around, fighting alongside you in the dark, for a long time to come? And, though in an ideal world I’d choose someone with experience who also meets the criteria I’ve outlined in this guide, if ultimately faced with a choice between experience and someone green with hustle, charisma, writing ability, and smarts, I’ll choose the latter every time.


1As an aside, this process is an unfortunate side effect of employment law meant to protect contractors from being exploited. The thinking is that by capping the length of time that someone can work as a contractor, you can exert pressure on the company to turn them into full-time hires who have to be given benefits. But as with many well-intentioned pieces of legislation, that’s not really what happens in practice. The practical takeaway, though, is that if someone is great at recruiting, they’re probably not going to have a bunch of short contracting stints.

Engineers can’t gauge their own interview performance. And that makes them harder to hire.

Note: This post is cross-posted from interviewing.io’s blog. interviewing.io is a company I founded that tries to make hiring suck less. I included it here because it seems like there’s a good amount of thematic overlap. And because there are some pretty graphs.

interviewing.io is an anonymous technical interviewing platform. We started it because resumes suck and because we believe that anyone, regardless of how they look on paper, should have the opportunity to prove their mettle. In the past few months, we’ve amassed over 600 technical interviews along with their associated data and metadata. Interview questions tend to fall into the category of what you’d encounter at a phone screen for a back-end software engineering role at a top company, and interviewers typically come from a mix of larger companies like Google, Facebook, and Twitter, as well as engineering-focused startups like Asana, Mattermark, KeepSafe, and more.

Over the course of the next few posts, we’ll be sharing some { unexpected, horrifying, amusing, ultimately encouraging } things we’ve learned. In this blog’s heroic maiden voyage, we’ll be tackling people’s surprising inability to gauge their own interview performance and the very real implications this finding has for hiring.

First, a bit about setup

When an interviewer and an interviewee match on our platform, they meet in a collaborative coding environment with voice, text chat, and a whiteboard and jump right into a technical question. After each interview, people leave one another feedback, and each party can see what the other person said about them once they both submit their reviews. If both people find each other competent and pleasant, they have the option to unmask. Overall, interviewees tend to do quite well on the platform, with just under half of interviews resulting in a “yes” from the interviewer.

If you’re curious, you can see what the feedback forms look like below. As you can see, in addition to one direct yes/no question, we also ask about a few different aspects of interview performance using a 1-4 scale. We also ask interviewees some extra questions that we don’t share with their interviewers, and one of those questions is about how well they think they did. In this post, we’ll be focusing on the technical score an interviewer gives an interviewee and the interviewee’s self-assessment (both are circled below). For context, a technical score of 3 or above seems to be the rough cut-off for hirability.

[Image: feedback form for interviewers]

[Image: feedback form for interviewees]

Perceived versus actual performance

Below, you can see the distribution of people’s actual technical performance (as rated by their interviewers) and the distribution of their perceived performance (how they rated themselves) for the same set of interviews.1

You might notice right away that there is a little bit of disparity, but things get interesting when you plot perceived vs. actual performance for each interview. Below is a heatmap of the data where the darker areas represent higher interview concentration. For instance, the darkest square represents interviews where both perceived and actual performance were rated as a 3. You can hover over each square to see the exact interview count (denoted by “z”).

If you run a regression on this data2, you get an R-squared of only 0.24, and once you take away the worst interviews, it drops even further to 0.16. For context, R-squared is a measurement of how well you can fit empirical data to some mathematical model. It’s on a scale from 0 to 1, with 0 meaning that everything is noise and 1 meaning that everything fits perfectly. In other words, even though some small positive relationship between actual and perceived performance does exist, it is not a strong, predictable correspondence.
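
If you want to reproduce this kind of check on your own data, it’s just a linear regression of perceived scores on actual scores. Here’s a minimal sketch with invented score pairs; the real data set has a few hundred interviews.

```python
import numpy as np
from scipy import stats

# Invented score pairs on the 1-4 scale.
actual    = np.array([3, 2, 4, 1, 3, 3, 2, 4, 3, 1])  # interviewer's technical score
perceived = np.array([2, 3, 3, 2, 4, 2, 2, 3, 3, 3])  # interviewee's self-rating

result = stats.linregress(actual, perceived)
print(f"R-squared = {result.rvalue ** 2:.2f}")  # a weak fit, as in the post
```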

You can also see there’s a non-trivial amount of impostor syndrome going on in the graph above, which probably comes as no surprise to anyone who’s been an engineer.

Gayle Laakmann McDowell of Cracking the Coding Interview fame has written quite a bit about how bad people are at gauging their own interview performance, and it’s something that I had noticed anecdotally when I was doing recruiting, so it was nice to see some empirical data on that front. In her writing, Gayle mentions that it’s the job of a good interviewer to make you feel like you did OK even if you bombed. I was curious about whether that’s what was going on here, but when I ran the numbers, there wasn’t any relationship between how highly an interviewer was rated overall and how off their interviewees’ self-assessments were, in one direction or the other.

Ultimately, this isn’t a big data set, and we will continue to monitor the relationship between perceived and actual performance as we host more interviews, but we did find that this relationship emerged very early on and has continued to persist with more and more interviews — R-squared has never exceeded 0.26 to date.

Why this matters for hiring

Now here’s the actionable and kind of messed up part. As you recall, during the feedback step that happens after each interview, we ask interviewees if they’d want to work with their interviewer. As it turns out, there’s a very statistically significant relationship (p < 0.0008) between whether people think they did well and whether they’d want to work with the interviewer. This means that when people think they did poorly, they may be a lot less likely to want to work with you3. And by extension, it means that in every interview cycle, some portion of interviewees are losing interest in joining your company just because they didn’t think they did well, despite the fact that they actually did.
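
If you’d like to run this kind of check yourself, one standard approach is a test on a 2x2 contingency table of self-assessment vs. willingness to work with the interviewer. Here’s a minimal sketch using Fisher’s exact test on hypothetical counts; it isn’t necessarily the exact test we used.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table:
# rows = did the interviewee think they did well?
# columns = would they want to work with the interviewer? (yes, no)
table = [
    [90, 15],   # thought they did well
    [60, 45],   # thought they did poorly
]

odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.6f}")
```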

How can one mitigate these losses? Give positive, actionable feedback immediately (or as soon as possible)! This way people don’t have time to go through the self-flagellation gauntlet that happens after a perceived poor performance, followed by the inevitable rationalization that they totally didn’t want to work there anyway.

Lastly, a quick shout-out to Statwing and Plotly for making terrific data analysis and graphing tools respectively.

1There are only 254 interviews represented here because not all interviews in our data set had comprehensive, mutual feedback. Moreover, we realize that raw scores don’t tell the whole story and will be focusing on standardization of these scores and the resulting rat’s nest in our next post. That said, though interviewer strictness does vary, we gate interviewers pretty heavily based on their background and experience, so the overall bar is high and comparable to what you’d find at a good company in the wild.

2Here we are referring to linear regression, and though we tried fitting a number of different curves to the data, they all sucked.

3In our data, people were 3 times less likely to want to work with their interviewers when they thought they did poorly.