The unvarnished, unbundled guide to hiring tools

I get a lot of questions about which hiring tools do what and how they’re different from each other, so I decided to draw an ugly, yet handy, picture (see below).

By the way, the reason this post has “unbundled” in the title is that many hiring tools, in part because we’re all on the VC funding treadmill, aspire to be more than they are and to ultimately be the one ring that rules them all, all the way from sourcing to interviewing to reference checks to onboarding to god knows what. So far, these attempts at grand unification, much like communism, have not panned out in practice. Though most tools claim to do more, most of them do the extra stuff badly1 and will still try to upsell you on how they can solve all your hiring needs. Therefore, in the picture above, I’ve chosen the primary use case for each tool, i.e. the use case that each tool has actually gotten traction for (and the use case that I believe they’re actually good at).

And last thing. I run interviewing.io, which means that my take on other tools is unavoidably biased. But, hell, it’s my blog, and I can say whatever I want… and having been in this industry for almost 10 years as an engineer and later as both an in-house and an agency recruiter and having spent the past 5 years running a successful hiring marketplace, I have acquired my prejudices the honest way, through laboratory experience.2

All that said, a few of the choices I made in the picture probably require some explanation, so here goes…

I thought HackerRank also had an interview tool. 
Yep, they do, it’s called CodePair. However, last I checked, in order to use it, you also had to buy other things, i.e. you can’t just use CodePair for technical interviews without buying in to the broader HackerRank ecosystem. And though CoderPad isn’t paying me for this, I think it’s a superior tool on many levels (and the one we actively chose to use inside of interviewing.io, forsaking all others). CodeSignal also has their own tool and Codility does as well, but that’s kind of the point of this post: tools unbundled. I’m listing the tools that have the differentiator in question as their core competency, not an add-on on some enterprise checklist.

Why is Triplebyte in the middle of the sourcing section?
Triplebyte recently pivoted to a new model. Instead of interviewing their candidates before sending them to customers, they now rely on a quiz, the results of which they use to annotate candidate profiles that recruiters can source from. Before that, they had their own interviewers conduct technical interviews with candidates and also had their talent managers match candidates to companies.

I thought Hired, AngelList, and LinkedIn had some kind of skills assessment/technical vetting?
They do, but last I checked it was an asynchronous test (rather than a human interviewer). In my opinion, asynchronous skills testing on these platforms has some value, but it’s quite limited for a few reasons:

  • Skills testing for candidates is optional on these platforms, which means that 1) not all profiles are vetted and 2) you get a good amount of selection bias, because the candidates who opt in tend to be the ones without much leverage, e.g. juniors and people who need visa sponsorship
  • They’re much easier to cheat on
  • And of course asynchronous tests are lower fidelity than human interviewers (or at least the ones I’ve seen to date… but I want to be proven wrong)

I thought AngelList had interested candidates come to you?
They do. But like any jobs board, it’ll be noisy and probably full of candidates who don’t have much leverage, e.g. juniors/bootcamp grads and people requiring visa sponsorship.

Should I use take-home tests in my process?
As with many of my answers, it’s a matter of leverage. Candidates who have lots of options probably won’t spend time on take-homes. Candidates who don’t, either because they’re junior or because they don’t get a lot of recruiter outreach for other reasons, will.

Why are you so obsessed with leverage?
Because market forces rule everything around me.

Hey, if you do sourcing, why does your company literally have “interviewing” in the name?
The way we get candidates into our ecosystem is by offering them mock interviews. Then top performers from our practice pool can choose to use us for their job search. The name originally was meant to highlight the practice offering, but yeah, it’s confusing.

Is interviewing.io a good way to source candidates?
Yes. Yes it is.

  1. Not all… some do a decent job of this.
  2. The bit about acquiring your prejudices the honest way is one of my favorite quotes, and credit goes to James Roberge, electrical engineering professor at MIT.

Thinking about attending a coding bootcamp? Ask them these questions first.

I get a lot of emails from prospective career changers who’ve read my stuff (especially the one in Forbes where I went off about how MS degrees in computer science are snake oil) asking for advice about breaking into software engineering.

Many of them ask about bootcamps. Almost all are surprised by the harsh reality that, though bootcamps can be a perfectly useful and valid start to your career change journey, they are not the magical panacea that they purport to be… and that a true career change is going to take a lot of blood, sweat, and autodidactic tears after graduation for most people.

Why do I believe this? A few reasons. First, I did a Twitter straw poll about post-bootcamp outcomes a while ago, and it was pretty grim:

Post-bootcamp employment Twitter poll

Of course, Twitter straw polls aren’t science or even really data. What *is* data is how bootcamp grads perform in technical interviews. At interviewing.io, we’ve run pilots with most of the reputable programs at one point or another, hoping that we’d be able to place their students. The sad truth is that almost every current bootcamp student who participated in interviewing.io’s mock interview pool failed. To be fair, our audience is usually senior engineers, but interviewers see candidate seniority and do adjust question difficulty. Despite that, the outcomes were not encouraging.

It’s not that the students don’t have potential. It’s that every program I’ve seen doesn’t dedicate nearly enough time in the curriculum to interview prep. Technical interviews are hard and scary for everyone, even FAANG engineers with 5+ years of experience. The idea that you can teach someone who’s never coded before big O notation AND get them proficient at writing efficient code and articulating trade-offs in 2 weeks (this is how long I’ve heard the most reputable programs spend on technical interviewing) is laughable.

So, without further ado, for those of you considering attending a bootcamp, these are the questions you should ask.

  • Do you ask people to leave before graduation if they’re struggling?
  • Does your job placement rate include just graduates?
  • Is your job placement rate an average of all cohorts or just a few/the best ones?
  • What is considered “getting a job”? Does it include people who are working on contract or part-time? Does it include people who are now TAs or instructors at the bootcamp? Does it include people who found work doing something other than software engineering?
  • What is the median salary of grads (not just the average)? And do those numbers include part-time work/other job titles or just full-time software engineers?
  • For the students who ended up at FAANG, what were their backgrounds? Were they electrical engineering majors? Physics students? What portion of people who ended up at FAANG didn’t know how to code before doing the bootcamp?
  • What portion of your curriculum is dedicated to technical interview prep?
  • How many mock interviews with industry engineers (not peers or instructors) will I get as part of the course?

Let me know if these questions help you in your adventures, brave heroes. Another resource you can use is the CIRR — they’ve created a standardized reporting format that bootcamps can use to disclose outcomes, and you can see outcomes from participating bootcamps for H2 (second half) of 2018. It’s not everything, but it’s a start.

How to write stuff that gets on the front page of Hacker News

Hi. My name is Aline, leeny on Hacker News. My team at interviewing.io and I have written a lot of stuff, and most of it has been on the Hacker News front page — of the 30 blog posts I (and later we) have submitted, 27 have been on the front page, and over the last few years, our writing has been read by millions of people.

We wrote most of these… though some are just great content I underhandedly beat the author to submitting and feasted on the ill-gotten karma.

Though the first few things I ever wrote were driven by a feckless mix of intuition, terror, and rage (I write a lot about how engineering hiring is unfair and inefficient and broken), over time I began to notice some common threads among my most successful posts, and these realizations have made it easier for me to weep less, write more, pass on the learnings to my team, and create a somewhat repeatable system of content generation.

I’m not altogether unaware that the title of this post has a whiff of hubris about it and merits some amount of disclaiming. I don’t claim that my way of writing is the only way, nor do I claim that it’s going to work forever. Every time I write something and submit it, I ask myself, “Is this it? Is this the one where I find out the formula no longer works?” It’s terrifying and it’s fickle, and I’m beyond grateful to the HN community for reading interviewing.io’s stuff as long as it has.

What makes content sticky?

This list isn’t exhaustive, and surely there are other strategies for crafting sticky content, but I can only talk about the two that have worked well for us. The most effective strategy, in my experience, is to tap into a controversial point that your audience already holds and then back it up with data of your own that confirms their suspicions.

The second strategy is to share something uniquely helpful with your audience that makes them better at some meaningful aspect of their lives.

I use both of these techniques repeatably, but in my experience, the “controversial with data” technique is way more effective than being “helpful”. More on that later, but first, here’s how to execute on both.

Confirmation bias, cocktail parties, and data

What is confirmation bias? It’s why people enjoy saying “I told you so!” It’s the tendency to interpret new information in a way that reinforces existing beliefs… preferably controversial beliefs that your audience suspects are true (and are probably frustrated about) but can’t definitively back up.

Confirmation bias comic

In our case, it was a bunch of aspects of status-quo hiring, stuff like: resumes suck, LinkedIn endorsements are dumb, technical interviews are being done badly and the results aren’t deterministic, and so on and so forth.

So, you take that kernel of frustration, and then you put some data firepower behind it. Find the data that you have that no one else has, and use it to prove that those controversial beliefs do indeed hold water… lighting up the same parts of our brains that make us fall prey to confirmation bias, in other words.1

Another way to say it is that the best content marketing, in my mind, is the stuff that makes people smugly want to repeat it at cocktail parties. I don’t say that with judgment or derision. I derive much of the pleasure in my short, brutish life from being smug and right. It’s not something I’m necessarily proud of, but it’s true.

So, if you have something controversial to say, why does having data matter? Because no one cares what Aline “Dipshit” Lerner thinks about hiring. You and your readers might hold all sorts of controversial opinions about the world, but until you’re really famous, your opinion doesn’t matter more than anyone else’s. But data (especially if it’s proprietary) can elevate an anecdote to gospel. Data provides you with the credibility that nothing else can at this stage — no matter who you are, if you have compelling data, engineers will listen.

The one thing you really have in your favor in these situations is that, because no one knows who you are, the more sophisticated your audience, the more likely they are to take your good content seriously on its own merits. You don’t have a brand to protect, you don’t have a comms team; all you have is the unvarnished truth from the trenches.

With the attributes above in mind, think about what cool stuff you’ve discovered by virtue of working at your company. Do you have a data set you can mine for unique insights? Does having operated in your space at depth put you in a position where you can confirm or deny controversial assumptions about some aspect of human nature or our daily lives? If you’re a founder, what unique insight do you have that made you start this company in the first place? If you’re an employee, what part of the mission/vision/execution really resonated with you, to the exclusion of other options you had in the same space? Then, once you’ve identified the right sticky tidbit, it’s up to you to distill it into plain English and then back it up with data… which in practice means some very clear (and maybe pretty… though clear trumps pretty) graphs or visualizations.

It’s tempting to fall into the trap of creating content that tells rather than shows, and the myriad blog posts out there to the tune of “here’s how we run meetings” or “here’s our product process” are proof of that. Typically, posts like this don’t do very well because frankly, no one cares about how you run your processes until you’re a household name that others are trying to emulate. One exception to this rule is if you want to highlight something polarizing you’re doing. In that case, feel free to shout that directly to the world so it’s loud and clear and makes its way most directly to the fringe community you’re targeting. In other words, if you’re really gung ho about TDD, you can write a blog post called “Why we ALWAYS use TDD with no exceptions”, and it’ll do great because of confirmation bias among TDD evangelists, probably the very people you want to target.2

Being helpful

Though, in my experience, the controversial cocktail party technique is the most effective, you can’t always bust out controversy at the drop of a hat, and you might have plenty of useful, interesting things to say that don’t tickle our desire to be right. If you can’t be controversial, then be helpful. Note that “helpful” means giving your readers specific, actionable advice about things that have a big impact on their lives (love, work, sex, health) rather than general worldviews on these topics.

Also, note that being helpful is not nearly as effective as being controversial. Woe is us.

Controversy is more effective than being helpful… here’s the data

And then there’s this post, which has had one maybe controversial idea so far (namely that making your readers feel smug is what gets you eyeballs and clicks) but no data to speak of. To right that gross injustice, I looked at all the posts my team and I have contributed to Hacker News over the years and tagged each one with 3 attributes: whether it was controversial, whether it was helpful, and whether it had data.

Below is a graph showing the average number of HN upvotes per post type. I looked at whether a given post was helpful or controversial. And for each type, I broke apart posts into 2 subcategories: whether they had data/graphs or not.

[Plot: average HN upvotes by post type (helpful vs. controversial), split by whether the post had data]

I refrained from doing any significance testing because teasing apart independence here would have been an unprincipled nightmare. For instance, most of our helpful posts didn’t have data, so whether a post was helpful and whether it had data weren’t independent of each other. That said, there’s probably still something useful to be learned from just looking at the mean upvotes for each category, namely that if you don’t have data, then write helpful stuff. It’ll do OK. If you do have data, controversy reigns supreme.
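If you want to run the same bookkeeping on your own posts, it takes very little code. Below is a minimal sketch in Python with pandas; the titles, tags, and upvote counts are invented for illustration rather than taken from our actual dataset.

```python
# Sketch of the tag-and-average exercise described above.
# Every row here is made up; substitute your own submissions and upvote counts.
import pandas as pd

posts = pd.DataFrame(
    [
        # (title, type, has_data, upvotes) -- all illustrative
        ("post A", "controversial", True, 600),
        ("post B", "controversial", False, 120),
        ("post C", "helpful", False, 250),
        ("post D", "helpful", True, 310),
        ("post E", "controversial", True, 540),
    ],
    columns=["title", "type", "has_data", "upvotes"],
)

# Mean upvotes for each (type, has_data) bucket, i.e. the bars in the plot above
print(posts.groupby(["type", "has_data"])["upvotes"].mean())
```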

Examples of good content marketing

It’s easy to wax general. I don’t think this post is going to be helpful without some examples. Here are some examples of stellar writing that fall into the categories above.

Examples of controversial, data-driven content marketing

For me, the canonical, original, great data-driven posts all live in the OKCupid blog, which served as the lodestar of what good blogging could be. These days, the original posts are buried in a cave where no site nav breadcrumbs will go (they’ve been replaced by a sad facsimile of what they used to be, utterly inoffensive, bland, and humorless), and I had to google to find them. But, you know, gems like this:

  • OKCupid – The lies people tell in online dating where the controversial idea is that people really do lie a lot in online dating (this was controversial in 2010 back when it was socially appropriate to be embarrassed that you were dating online)
  • OKCupid – The case for an older woman where the controversial idea is that women over 30 are viable (very sad for me that this is controversial)
  • Uber early blog – Rides of glory where the controversial idea is that you can guess which of your users are having sex based on their ride usage data (this post was what introduced me to Uber and I expect helped meaningfully build their brand… there’s a reason it’s no longer up and I had to link to the web archive)
  • Priceonomics – The San Francisco drug economy where the controversial idea is that it’s very lucrative (and not very hard) to be a drug dealer in San Francisco, and many users are in tech

But… do posts HAVE to hit a nerve and make some portion of the population uncomfortable? Though those tend to be the most fun, this isn’t necessary to produce great content. Hiring is typically a much more tame subject than sex, but it’s possible to write controversial things about it — I’d be remiss if I didn’t link you to some of the things we’ve written. Here are a few favorites that exemplify our take on controversial, data-driven blogging:

Examples of helpful content marketing

As we discussed earlier, not all good content marketing falls into this controversial-anecdote-backed-up-with-data format. Some successful posts just have really useful content that makes you better at some meaningful part of your life.

And of course, a few of ours:

So, we have some theory about content marketing, and we have some practical examples. What now? To wit, here’s one last controversial piece for you: drinking a little might make you a more prolific writer.

How to actually make yourself write

The “cocktail party anecdote backed by data” premise is reliable and repeatable and it works, and I expect as you read this, you probably have some ideas about topics you could write about. Ideas are the easy part, however. How do you actually summon the wherewithal to write?

Before he was a hipster text editor, Ernest Hemingway was a churlish, surly alcoholic writer with an allergy to adverbs who coined the phrase “Write drunk, edit sober” and changed my life and liver forever.

When I was maybe halfway through writing Lessons from a year’s worth of hiring data (the first successful post I ever had), I hit what felt like an insurmountable wall. I had already spent months manually counting typos in resumes, had run a logistic regression and a bunch of statistical tests, and was pretty sure that I was onto something — the number of typos and grammatical errors in one’s resume appeared to matter way more than where someone worked or went to school. And there were other surprising findings, too. But when I tried to get the words out, they wouldn’t come. The typos thing was super cool, right? And surely a better, more competent human would do that finding justice when writing about it. In my hands, the work read like an insipid, stilted school assignment. I drew the blinds and sort of curled up in a ball on the ground for I don’t know how long… eventually my ex-husband and his friend who was visiting came home, a few beers in, and peeled me off the floor.

I don’t remember what the two of them said to me exactly, but my brain put it away in memory as something along the lines of, “Stop being dramatic and get out of your head and drink with us, for life is short and brutish.”

So I drank. And maybe then we had a dance party or something… I don’t know. But at some magical, serendipitous moment, Florence + The Machine came on. And I sat back down at my computer and started working myself into a frenzy to the tune of the music… “Hiring isn’t fair, the world isn’t fair… hiring isn’t fair, and the world isn’t fair, and fuck the fact that everyone uses resumes and rejects all manner of good people even though they’re clearly a crock because typos matter 50 kajillion times more than pedigree.”

And in that slightly drunken, fevered frenzy I wrote the rest of the post. It ended up getting cut in half or more by friends who were kind enough to extract a few cogent bits from whatever it was that I produced. The writing in that post isn’t the best, but it’s ok… and it was good enough to get the payload about typos (and generally about how dumb resumes are) across clearly, which is ultimately what mattered most.

Why does wine help me write (please see the footnote before you unleash your wrath)?3 Because, for a brief hour or so, it stills the inexorable pull of self-editing and silences the voices that tell you you’re a piece of shit who can’t write worth a damn. Now, you might still be a piece of shit who can’t write worth a damn, but you’ll never become a piece of shit who can write unless you actually write.

Once the voices are quiet, you can get out whatever is in your head. It doesn’t have to make sense, it doesn’t have to be ordered or flow, and it doesn’t have to be the most important takeaway you anticipate your post will ultimately have. Whatever it is will be raw and real… and then you (and your friends or coworkers if you’re lucky) will prune the drivel and mold it into something good.

So drink your wine (or don’t drink… just do whatever gets you in a good place) and put on whatever music fills your heart with rage (or inspiration if you’re not a broken human like me), and get to it. And do it again and again, until the ritual itself is what gives you comfort and lets you produce.

But, friends, be warned: please do your data analysis sober.


1 The folks at Priceonomics succinctly assigned “confirmation bias” as the right term for this technique. I first heard it at a workshop they ran. I had been doing this for years but hadn’t ever heard the term before. They do great work around content marketing and have made a business out of harnessing confirmation bias and data. They’ve also written a much lengthier guide to what makes good content than this post.

2 I have no idea why I picked TDD for this example. I do not have strong feelings about TDD, and there are probably way more controversial things out there… like using JavaScript in server-side production.

3 I’m probably going to catch a lot of flack and vitriol for encouraging drinking. Look, it works for me. It doesn’t have to work for you, and it might be really bad for you in particular because of some unserendipitous mix of genetics and past decisions. So, instead of drinking, let’s use alcohol as metonymy for any number of activities that quiet the voices in the head and let you focus. I hear that among the well-adjusted, meditation is all the rage, as is physical exercise. For those on the fringe, we drink in the dark.

Diversity quotas suck. Here’s why.

A few days ago, I contributed to a roundtable discussion-style post about diversity quotas (that is, setting specific hiring targets around race and gender) on the Key Values blog. Writing my bit there was a good forcing function for exploring the issue of diversity quotas at a bit more length… and if I’m honest, this is a topic I’ve had really strong opinions about for a while but haven’t had the chance to distill. So, here goes.

I think it’s important to ask ourselves what we want to accomplish with diversity quotas in the first place. Are we trying to level the playing field for marginalized groups? To bring in the requisite diversity of thought that correlates so strongly with a better bottom line? Or to improve our optics so that when the press writes about our company’s diversity numbers, we look good? Unless diversity quotas are truly an exercise in optics, I firmly believe that, in the best case, they’re a band-aid that fails to solve deep, underlying problems with hiring and that, in the worst case, they do more harm than good by keeping us complacent about finding better solutions, and paradoxically, by undermining the very movement they’re meant to help. Instead of trying to manage outcomes by focusing on quotas, we should target root causes and create the kind of hiring process that will, by virtue of being fair and inclusive, bring about the diversity outcomes we want.

Why are quotas bad? If it’s not just about optics, and we are indeed trying to level the playing field for marginalized groups, let’s pretend for a moment that quotas work perfectly and bring us all the desired results. Even in that perfect world, we have to ask ourselves if we did the right thing. Any discussion about leveling the playing field for marginalized groups should not just be about race but should also include socioeconomic status. And age. And a myriad of other marginalized groups in tech.

We often focus on race and gender because those are relatively easy to spot. Socioeconomic status is harder because you can’t tell how someone grew up, and you can’t really ask “Hey, were your parents poor?” on an application form. Age is a bit easier to spot (especially if you spent your 20s lying around in the sun like I did), but it’s illegal to ask about age in job interviews… to prevent discrimination! Surely, that’s a contradiction in terms. So, if we’re leaving out socioeconomic status and age and a whole bunch of other traits when we assign quotas, are we really leveling the playing field? Or are we creating more problems?

One of the downsides of diversity quotas is the tokenization of candidates, which often manifests as stereotype threat, one of the very things we’re trying to prevent. I can’t tell you how many times people have asked me if I thought I got into MIT because I’m a girl. That feels like shit… in large part because I DON’T KNOW if I got into MIT because I’m a girl. Stereotype threat is a real thing that very clearly makes people underperform at their jobs… and then creates a vicious cycle where the groups we’re trying to help end up being tokenized and scrutinized for underperformance caused by the very thing that’s supposed to be helping them.

So, what about diversity of thought? If you’re really going after candidates who can bring fresh perspectives to the table, their lived experience should trump their gender and ethnicity (though of course, those can correlate heavily). If you’re really after diversity of thought, then educational background/pedigree and previous work experience should weigh just as heavily. Before I became a software engineer, I spent 3 years cooking professionally. Seeing how hiring happened in a completely different field (spoiler: it’s a lot fairer) shaped my views on how hiring should be done within tech. And look, if you put a gun to my head and asked me, given absolutely identical abilities to do the job, whether I should hire a woman who came from an affluent background, aced her SATs because of access to a stellar prep program and supportive parents, went to a top school and interned at a top tech company over a man who dropped out of high school and worked a bunch of odd-jobs and taught himself to code and had the grit to end up with the requisite skills… I’ll take the man.1

But I’ll also feel shitty about it because I don’t think I should have to make choices like this in the first place. And the fact that I have to is what’s broken. In other words, quotas don’t work from either a moral perspective or from a practical one. At best, they’re a band-aid solution covering up the fact that your hiring process sucks, and the real culprit is the unspoken axiom that the way we’re doing hiring is basically fine. I’ve already written at length about how engineering hiring and interviewing need to change to support diversity initiatives, so I won’t do it here, but the gist is this: fixing hiring is way harder than instituting quotas, and low-hanging fruit aren’t going to get us to a place of equal opportunity. Better screening and investments in education will. At interviewing.io, because we rely entirely on performance in anonymous technical interviews rather than resumes to surface top-performing candidates, 40% of the hires we’ve made for our customers are people from non-traditional backgrounds and underrepresented groups (and sometimes these are candidates that the same companies had previously rejected based on their resumes). The companies we’ve hired for, the ones that have benefitted from access to these candidates, have been willing to undergo the systemic process change and long-term thinking that effecting this level of change requires. We know our approach works. It’s hard, and it takes time and effort, but it works.


1 There was a recent New York Times piece about how “diversity of thought” is an excuse that lets us be lazy about working to hire people from underrepresented groups. I believe that the kind of “root cause” approach we’re advocating, where we invest in long-term education and create a fairer hiring process, is significantly harder than doing something like quotas.

In defense of Palantir… or why the Department of Labor got the wrong man

On September 26th, the U.S. Department of Labor filed a suit against Palantir Technologies, alleging that Palantir’s engineering hiring practices discriminate against Asian applicants. I don’t have any salacious insider information about this suit, but I do have quite a bit of insight into how technical hiring works. Palantir and the DOL are really arguing over using resumes versus employee referrals to screen job candidates, when smart companies of a certain size should primarily rely on neither. In other words, rather than Palantir, standard hiring practices are really what should be on trial.

The DOL’s suit is based on the unfavorable disparity between the number of “qualified” Asian applicants for 3 engineering roles between 2010 and 2011 and the number of resulting Asian hires. Below (as taken from the full complaint), you can see the roles covered by the suit, the number of applicants and hires, and the odds, according to the DOL’s calculations, that these disparities happened by chance:

[Table from the DOL complaint: the three roles covered by the suit, applicant and hire counts, and the DOL’s calculated odds that the disparities happened by chance]
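As an aside on where numbers like those come from: I don’t know exactly which test the DOL’s statisticians used, but a common way to put odds on “this disparity happened by chance” is a hypergeometric tail probability, i.e. treating hires as a random draw from the qualified applicant pool. Here’s a rough sketch; every count below is made up for illustration (the real ones are in the complaint).

```python
# Hypothetical example of computing the chance of a hiring disparity at least
# this large, assuming hires were drawn at random from the qualified pool.
# All numbers are invented; this is not the DOL's actual calculation.
from scipy.stats import hypergeom

total_applicants = 1000   # qualified applicants for one role (made up)
asian_applicants = 700    # qualified Asian applicants (made up)
total_hires = 30          # total hires for the role (made up)
asian_hires = 10          # Asian hires (made up)

# P(X <= asian_hires) if hiring were independent of race
p = hypergeom.cdf(asian_hires, total_applicants, asian_applicants, total_hires)
print(f"chance of a disparity at least this large: {p:.2e}")
```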

My issue with this setup is simple: what does qualified actually mean? According to the complaint, “Palantir used a four-phase hiring process in which Asian applicants were routinely eliminated during the resume screen and telephone interview phases despite being as qualified as white applicants.” A four-phase hiring process is typical in tech companies and refers to a resume screen followed by a call with a recruiter, followed by a technical phone screen where the applicant writes code while another engineer observes, and concluded by a multi-hour onsite interview.

To determine basic “qualification,” the DOL relied, at least in part (and likely heavily), on the content of applicants’ resumes which, in turn, boils down to a mix of degree and work experience. Resumes are terrible predictors of engineering ability. I’ve looked at tens of thousands of resumes, and in software engineering roles, there is often very little correspondence between how someone looks on paper and whether they can actually do the job.

How did I arrive at this conclusion? I used to run technical hiring at a startup and was having a hell of a time trying to figure out which candidates to let through the resume screen. Over and over, people who looked good on paper (had worked at companies like Google, had gone to schools like MIT, and so on) crashed and burned during technical interviews, whereas candidates without pedigree often knocked it out of the park. So, I decided to examine the resumes of everyone who applied over the course of a year as well as those of past and current employees. After looking at hundreds of resumes and considering everything from years of experience and highest degree earned to GPA and prestige of previous employers, it turned out that the thing that mattered most, by a huge margin, wasn’t any of those credentials. Rather, it was the number of grammatical errors and typos on their resume.
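For the curious, the analysis itself was nothing exotic: a logistic regression predicting interview performance from resume features. Here’s a rough sketch of what that looks like; the file name and feature columns are placeholders I made up, not the actual dataset.

```python
# Sketch of a logistic regression over resume features. The CSV and column
# names are hypothetical; the point is the shape of the analysis, not the data.
import pandas as pd
import statsmodels.api as sm

resumes = pd.read_csv("resumes.csv")  # one row per candidate (placeholder file)

# Candidate features, assumed numeric (0/1 for the boolean ones)
features = ["num_typos", "years_experience", "top_school", "top_company", "gpa"]
X = sm.add_constant(resumes[features])
y = resumes["passed_interview"]  # 1 if the candidate did well, 0 otherwise

model = sm.Logit(y, X).fit()
print(model.summary())  # see which coefficients actually move the needle
```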

Don’t believe me that screening for education and experience doesn’t work? Then consider the following experiment. A few years ago, I showed a set of anonymized resumes from my collection to 150 engineers, recruiters, and hiring managers and asked them one question: “Would you interview this candidate?” Not only did participants, across the board, fail at predicting who the strong candidates were (the odds of guessing correctly were roughly 50%, i.e. the same as flipping a coin), but, much more importantly, no one could even agree on what a strong candidate looked like in the first place.

Organizations realize that resumes are noisy and are forced to explore other, more reliable channels. In the case of Palantir and many other companies, this boils down to relying on employee referrals, and that may be the DOL’s strongest argument. According to the complaint, “…the majority of Palantir’s hires into [the three positions listed in the suit] came from an employee referral system that disproportionately excluded Asians.” Using referrals as a hiring channel is an extremely common practice, with the rationale being that it’s a reliable source of high-quality candidates. This makes sense. If resumes were reliable, then referrals wouldn’t be such a valued channel.

Despite its ubiquity, is relying on referrals grounds for a discrimination suit? Perhaps. But referrals were Palantir’s perfectly reasonable attempt to find a better screen than resumes. Palantir just didn’t go far enough in looking at other options. What if, instead of being bound to hire from the quasi-incestuous sliver of your employees’ social graph, you could have reliable, high-signal data about your entire candidate pool?

Until recently, when hiring, you had to rely on proxies like resumes to make value judgments because there simply wasn’t a good way to get at more direct, meaningful, and unbiased data about your candidates.

We now have the technology to change that. A slew of products anonymize candidate names, entirely occluding race. A whole other set of tools enables you to send relevant take-home exercises to all your applicants and automatically score their submissions, using those scores as a more indicative resume substitute. And there’s my company, interviewing.io, which I’m totally plugging right now but which also happens to be the culmination of my attempts to fix everything that’s pissed me off about hiring for years. interviewing.io matches companies with candidates based entirely on how those candidates have been doing in technical interviews up until that point. Moreover, every interview on our platform is blind — by the time you unmask with a candidate, you’ve decided whether you’re going to bring them in for an onsite, and you’ve used nothing but their interview performance to make that decision.

Whichever solution ends up being the right one, one thing is clear. It’s time to shut down outdated, proxy-based hiring practices. That doesn’t mean paying lip service to diversity initiatives. It means fundamentally rethinking how we hire, paring away every factor except whether the candidate in question can do the job well.

Any other kind of hiring practice is potentially discriminatory. But even worse, it’s inefficient and wasteful. And it’s ultimately the thing that, unlike Palantir, truly deserves our wrath.