Review of CoderPad

Note: This post was adapted from a review I originally wrote on Quora.

I recently found out about CoderPad, a collaborative coding tool that lets you run your code as you go and is particularly handy for technical interviews. Here’s what I thought of it.

Interviewing is hard

I’ve been on both sides of the table for a fair number of technical interviews—from phone screens and live coding rounds to in-person whiteboard sessions. For both the interviewee and the interviewer, the live coding round can be particularly draining for a variety of reasons.

As an interviewee, you’re faced with a variety of stumbling blocks during this round. For one, you’re working with an IDE or editor that’s likely an inferior version of what you’re used to. In a real-life situation, it’s also unlikely that you would write an entire function or class without testing it every so often. As a result, the cycles spent working in an unfamiliar environment and without your usual testing routine can detract from what matters: showing what you can do.

As an interviewer, it’s a balancing act: staying engaged while a candidate muddles through a problem, keeping the candidate from straying too far down the wrong path, and stopping yourself from spoon-feeding them solutions. You’re simultaneously tracking their process—namely, how long it took the candidate to come up with a game plan, when they turned that game plan into a shitty solution, and when they took that shitty solution and made it elegant. And unless you want to look like a n00b, you’ll have to be able to catch new bugs and evaluate creative solutions effectively.

Coding interviews are, at their best, a proxy for actual on-the-job aptitude. At their worst, they’re poor amalgams of real work environments where you’re stripped of the ability to syntax-highlight and run your damn code.

So, yeah, interviewing is hard.

The other guys

To address some of these issues, it helps to have a tool that can do some of the heavy lifting for you.

One of these tools is Google Docs, which many companies still use to conduct live coding rounds. On the upside, Google Docs can be extremely versatile, and the ability to draw can come in handy if part of the coding round is conceptual or high level. Working against it: lack of indentation, line numbers, and syntax highlighting. Oh, and you can’t run your code.

A big improvement over that is a tool called Collabedit. Collabedit has a slick UI and provides support for nearly every language you’d need. However, candidates still have to code blind—it can’t run code either.

CoderPad

Enter CoderPad. CoderPad is a collaborative editor with a REPL built in. It’s not the first product to feature collaborative coding, and it’s not the first product to feature a live REPL, but it is the first product I’ve seen that combines these two elements really, really well.

In other words, CoderPad allows both the interviewer and interviewee to run code as it’s being written.

[Screenshot: the CoderPad editor]

CoderPad supports a number of interpreted and compiled languages—which is pretty awesome. In addition to more closely mimicking how people actually work, it takes the heat off the interviewer a bit so he can focus on whether the candidate is a good fit.

Cool features include:

  • Nice aesthetics: syntax highlighting, line numbers, indentation
  • Great language coverage for both compiled and interpreted languages, including JavaScript, Python, Ruby, Java, Scala, C/C++, and Go
  • Really beautiful/slick UI
  • Ability to include as many collaborators as you want
  • Playback feature so you can see how people got there, rather than just the end code
  • Reasonable pricing scheme

Some limitations/nice-to-haves:

  • Ability to add timestamps as the candidate works so you can track progression
  • Ability to unshare code with candidate after the interview is over
  • Faster compile times for compiled languages (the interpreter is already really fast)

Despite these minor limitations, as far as I know, there isn’t another collaborative coding tool with live REPL out there that approaches CoderPad’s level of polish and utility. You should give it a spin next time you’re interviewing someone.

Lessons from a year’s worth of hiring data

I ran technical recruiting at TrialPay for a year before going off to start my own agency. Because I used to be an engineer, one part of my job was conducting first-round technical interviews, and between January 2012 and January 2013, I interviewed roughly 300 people for our back-end/full-stack engineer position.

TrialPay was awesome and gave me a lot of freedom, so I was able to use my intuition about whom to interview. As a result, candidates ranged from self-taught college dropouts or associate’s degree holders to PhD holders, ACM winners, MIT/Harvard/Stanford/Caltech students, and Microsoft, Amazon, Facebook, and Google interns and employees with a lot of people in between.

While interviewing such a wide cross section of people, I realized that I had a golden opportunity to test some of the prevalent folk wisdom about hiring. The results were pretty surprising, so I thought it would be cool to share them. Here’s what I found:

[Figure: the most surprising findings]

And the least surprising thing that I was able to confirm was that:

[Figure: the least surprising finding]

Of course, a data set of size 300 is a pittance, and I’m a far cry from a data scientist. Most of the statistics here was done with the help of Statwing and with Wikipedia as a crutch. With the advent of more data and more rigorous analysis, perhaps these conclusions will be proven untrue. But, you gotta start somewhere.

Why any of this matters

In the status quo, most companies don’t run exhaustive analyses of hiring data, and the ones that do keep it closely guarded and only share vague generalities with the public. As a result, a certain mysticism persists in hiring, and great engineers who don’t fit in “the mold” end up getting cut before another engineer has the chance to see their work.

Why has a pedigree become such a big deal in an industry that’s supposed to be a meritocracy? At the heart of the matter is scarcity of resources. When a company gets to be a certain size, hiring managers don’t have the bandwidth to look over every resume and treat every applicant like a unique and beautiful snowflake. As a result, the people doing initial resume filtering are not engineers. Engineers are expensive and have better things to do than read resumes all day. Enter recruiters or HR people. As soon as you get someone who’s never been an engineer making hiring decisions, you need to set up proxies for aptitude. Because these proxies need to be easily detectable, things like a CS degree from a top school become paramount.

Bemoaning that non-technical people are the first to filter resumes is silly because it’s not going to change. What can change, however, is how they do the filtering. We need to start thinking analytically about these things, and I hope that publishing this data is a step in the right direction.

Method

To sort facts from folk wisdom, I isolated some features that were universal among resumes and would be easy to spot by technical and non-technical people alike, and then ran statistical significance tests on them. My goal was to determine which features were the strongest signals of success, which I defined as getting an offer. I ran this analysis on people whom we decided to interview rather than on every applicant; roughly 9 out of 10 applicants were screened out before the first round. The motivation there was to gain some insight into what separates decent candidates from great ones, which is a much harder question than what separates poor candidates from great ones.

Certainly there will be some sampling bias at play here, as I only looked at people who chose to apply to TrialPay specifically, but I’m hoping that TrialPay’s experience could be a stand-in for any number of startups that enjoy some renown in their specific fields but are not known globally. It also bears mentioning that this is a study into what resume attributes are significant when it comes to getting hired rather than when it comes to on-the-job performance.

Here are the features I chose to focus on (in no particular order):

  • BS in Computer Science from a top school (as determined by U.S. News and World Report)
  • Number of grammatical errors, spelling errors, and syntactic inconsistencies
  • Frequency of buzzwords (programming languages, frameworks, OSes, software packages, etc.)
  • How easy it is to tell what someone did at each of their jobs
  • Highest degree earned
  • Resume length
  • Presence of personal projects
  • Work experience in a top company
  • Undergraduate GPA
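
Statwing did the actual number crunching, but to make the method concrete, here’s a rough sketch of what feature-by-feature testing like this could look like in code. Everything here is hypothetical: the file name, the column names, and the choice of a chi-squared test with Cramér’s V as the effect size (a reasonable pairing for binary features and a binary outcome, though not necessarily what Statwing used under the hood).

```python
# Hypothetical sketch: test each binary resume feature against the binary
# offer outcome. This is not the original analysis; names are made up.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

candidates = pd.read_csv("interviewed_candidates.csv")  # hypothetical data

features = [
    "has_errors",          # typos/grammatical errors/inconsistencies
    "top_school_bs",       # BS in CS from a top school
    "top_company",         # work experience at an elite company
    "personal_projects",   # side projects listed on the resume
    "clear_descriptions",  # easy to tell what the candidate did
]

for feature in features:
    # 2x2 contingency table: feature present/absent vs. offer/no offer
    table = pd.crosstab(candidates[feature], candidates["got_offer"])
    chi2, p_value, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    effect_size = np.sqrt(chi2 / n)  # Cramér's V for a 2x2 table
    print(f"{feature}: p = {p_value:.4f}, effect size = {effect_size:.2f}")
```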

TrialPay’s hiring bar and interview process

Before I share the actual results, a quick word about context is in order. TrialPay’s hiring standards are quite high. We ended up interviewing roughly 1 in 10 people who applied. Of those, after several rounds of interviewing (generally a phone screen followed by a live coding round followed by an onsite), we extended offers to roughly 1 in 50, for an ultimate offer rate of 1 in 500. The interview process is pretty standard, though the company shies away from asking puzzle questions that depend on some amount of luck/clicking to get the correct answer. Instead, they prefer problems that gradually build on themselves and open-ended design and architecture questions. For a bit more about what TrialPay’s interview process looks (or used to look) like, check out Interviewing at TrialPay 101.

The results

Now, here’s what I discovered.

[Figure: bar chart of resume features; bar height represents effect size, and every feature with a bar was statistically significant]

These results were quite surprising, and I will try to explain and provide more info about some of the more interesting stuff I found.

The most significant feature by far was the presence of typos, grammatical errors, or syntactic inconsistencies.

Errors I counted included everything from classic transgressions like mixing up “its” and “it’s” to typos and bad comma usage. In the figure below, I’ve created a fictional resume snippet to highlight some of the more common errors.

[Figure: fictional resume snippet with common errors annotated]

This particular result was especially encouraging because it’s something that can be spotted by HR people as well as engineers. When I surveyed 30 hiring managers about which resume attributes they thought were most important, however, no one ranked number of errors highest. Presumably, hiring managers don’t think this attribute is that important for a couple of reasons: (1) resumes that are rife with mistakes get screened out before even getting to them, and (2) people almost expect engineers to be a bit careless with stuff like spelling and grammar. With respect to the first point, keep in mind that the resumes in this analysis were only of people whom we decided to interview. With respect to the second point, namely that engineers shouldn’t be held to the same writing standards as people in more humanities-oriented fields, I give you my next chart. Below is a breakdown of how resumes that ultimately led to an offer stacked up against those that didn’t. (Here, I’m showing the absolute number of errors, but when I ran the numbers against number of errors adjusted for resume length, the results were virtually identical.)


[Figure: histograms of resume error counts for candidates who got offers vs. those who didn’t]

As you can see, the distributions look quite different between the group of people who got offers and those who didn’t. Moreover, about 87% of people who got offers made 2 or fewer mistakes.

In startup situations, not only are good written communication skills extremely important (a lot of heavy lifting and decision making happens over email), but I have anecdotally found that being able to write well tends to correlate very strongly with whether a candidate is good at more analytical tasks. Not submitting a resume rife with errors is also a sign that the candidate has strong attention to detail, an invaluable skill when it comes to coding, where there are all manner of funky edge cases, and where you’re regularly called upon to review others’ code and help them find obscure errors that they can’t seem to locate because they’ve been staring at the same 10 lines of code for the last 2 hours.

It’s also important to note that a resume isn’t something you write on the spot. Rather, it’s a document that you have every opportunity to improve. You should have at least 2 people proofread your resume before submitting it. When you do submit, you’re essentially saying, “This is everything I have done. This is what I’m proud of. This is the best I can do.” So make sure that that is actually true, and don’t look stupid by accident.

Top company

No surprises here; if anything, the only surprise is that this attribute wasn’t more significant. Though I’m generally not too excited by judging someone on pedigree, having been able to hold down a demanding job at a competitive employer shows that you can actually, you know, hold down a demanding job at a competitive employer.

Of all the companies that our applicants had on their resumes, I classified the following as elite: Amazon, Apple, Evernote, Facebook, Google, LinkedIn, Microsoft, Oracle, any Y Combinator startup, Yelp, and Zynga.

Undergraduate GPA

After I ran the numbers to try to figure out whether GPA mattered, the outcome was a bit surprising: GPA appeared not to matter at all. Take a look at the GPA distribution for candidates who got offers versus candidates who didn’t.

[Figure: GPA distributions for offer vs. no-offer candidates]

As a caveat, it’s worth mentioning that roughly half of our applicants didn’t list their GPAs on their resumes, so not only is the data set smaller, but there are probably some biases at play. I did some experiments with filling in the missing data and separating out new grads, and I will discuss those results in a future post.

Is it easy to tell what the candidate actually did?

Take a look at this role description:

[Figure: role description that makes it easy to tell what the candidate did]

Now take a look at this one:

[Figure: role description that leans on industry lingo]

In which of these is it easier to tell what the candidate did? I would argue that the first snippet is infinitely more clear than the second. In the first, you get a very clear idea of what the product is, what the candidate’s contribution was in the context of the product, and why that contribution matters. In the second, the candidate is using some standard industry lingo as a crutch — what he said could easily be applied to pretty much any software engineering position.

Judging each resume along these lines certainly wasn’t an exact science, and not every example was as cut-and-dried as the one above. Moreover, while I did my best to avoid confirmation bias while deciding whether I could tell what someone did, I’m sure that the system wasn’t perfect. All this said, however, I do find this result quite encouraging. People who are passionate about and good at what they do tend to also be pretty good at cutting to the chase. I remember the feeling of having to write my resume when I was looking for my first coding job, and I distinctly remember how easily words flowed when I was excited about a project versus when I knew inside that whatever I had been working on was some bullshit crap. It was in the latter case that words like “software development life cycle” and a bunch of acronyms reared their ugly heads… a pitiful attempt to divert the reader from a lack of substance by waving a bunch of impressive-sounding terms in his face.

This impression is further confirmed by word clouds generated from the resumes of candidates who received an offer versus those who didn’t. For these clouds, I took words that appeared very frequently in one data set relative to how often they appeared in the other.
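
If you’re curious how that selection might work mechanically, here’s a minimal sketch, assuming the resumes are available as plain-text strings. The function names and thresholds are made up for illustration; the real clouds came out of whatever ad hoc tooling was handy at the time.

```python
# Minimal sketch: pick words that are heavily overrepresented in one set
# of resumes relative to another. Thresholds are arbitrary illustrations.
from collections import Counter
import re

def word_freqs(texts):
    """Normalized word frequencies across a list of resume strings."""
    words = re.findall(r"[a-z']+", " ".join(texts).lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def distinctive_words(corpus_a, corpus_b, ratio=3.0, min_freq=1e-4):
    """Words at least `ratio` times more frequent in corpus_a than corpus_b."""
    freqs_a = word_freqs(corpus_a)
    freqs_b = word_freqs(corpus_b)
    return {
        word: freq for word, freq in freqs_a.items()
        if freq >= min_freq and freq / freqs_b.get(word, min_freq) >= ratio
    }

# offer_resumes and no_offer_resumes would be lists of resume texts:
# offer_cloud = distinctive_words(offer_resumes, no_offer_resumes)
# no_offer_cloud = distinctive_words(no_offer_resumes, offer_resumes)
```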

[Figure: word cloud for resumes that led to an offer]

[Figure: word cloud for resumes that did not lead to an offer]

As you can see, “good” resumes focused much more on action words and doing stuff (“manage”, “ship”, “team”, “create”, and so on), whereas “bad” resumes focused much more on details, technologies used, and techniques.

Highest degree earned

Though highest degree earned didn’t appear to be significant in this particular data set, there was a definite trend that caught my attention. Take a look at the graph of offers extended as a function of degree.

[Figure: offer rate as a function of highest degree earned]

As you can see, the higher the degree, the lower the offer rate. I’m confident that with the advent of more data (especially more people without degrees and with master’s degrees), this relationship will become clearer. I believe that self-motivated college dropouts are some of the best candidates around because going out of your way to learn new things on your own time, in a non-deterministic way, while juggling the rest of your life is, in some ways, much more impressive than just doing homework for 4 years. I’ve already ranted quite a bit about how worthless I find most MS degrees to be, so I won’t belabor the point here.[1]

BS in Computer Science from a top school

But wait, you say, even if highest degree earned doesn’t matter, not all BS degrees are created equal! And, having a BS in Computer Science from a top school must be important because it’s in every fucking job ad I’ve ever seen!

And to you I say, Tough shit, buddy. Then I feel a bit uncomfortable using such strong language, in light of the fact that n ~= 300. However, roughly half of the candidates (122, to be exact) in the data set were sporting some fancy pieces of paper. And yet, our hire rate was not too different between people who had said fancy pieces of paper and those who didn’t. In fact, in 2012, half of the offers we made at TrialPay were to people without a BS in CS from a top school. This doesn’t mean that every dropout or student from a third-rate school is an unsung genius — there were plenty whom I cut before interviewing because they hadn’t done anything to offset their lack of pedigree. However, I do hope that this finding gives you a bit of pause before taking the importance of a degree in CS from a top school at face value.

[Figure: offers extended with vs. without a BS in CS from a top school]

In a nutshell, when you see someone who doesn’t have a pedigree but looks really smart (has no errors/typos, very clearly explains what they worked on, shows passion, and so forth), do yourself a favor and interview them.

Personal projects

Of late, it’s become accepted that you should have some kind of side project in addition to whatever you’re doing at work, and this advice becomes especially important for people who don’t have a nice pedigree on paper. Sounds reasonable, right? Here’s what ends up happening. To game the system, applicants start linking to virtually empty GitHub accounts that are full of forked repos where they, at best, fixed some silly whitespace issue. In other words, it’s like 10,000 forks when all you need is a glimmer of original thought.

Yay forks.

Outside of that, there’s the fact that not all side projects are created equal. I can find some silly tutorial for some flashy UI thing, copy the code from it verbatim, swap in something that makes it a bit personal, and then call that a side project on my resume. Or I can create a new, actually useful JavaScript framework. Or I can spend a year bootstrapping a startup in my off hours and get it up to tens of thousands of users. Or I can arbitrarily call myself CTO of something I spaghetti-coded in a weekend with a friend.

Telling the difference between these kinds of projects is somewhat time-consuming for someone with a technical background and almost impossible for someone who’s never coded before. Therefore, while awesome side projects are a HUGE indicator of competence, if the people reading resumes can’t (either because of lack of domain-specific knowledge or because of time considerations) tell the difference between awesome and underwhelming, the signal gets lost in the noise.

Conclusion

When I started this project, it was my hope that I’d be able to debunk some myths about hiring or at least start a conversation that would make people think twice before taking folk wisdom as gospel. I also hoped that I’d be able to help non-technical HR people get better at filtering resumes so that fewer smart people would fall through the cracks. Some of my findings were quite encouraging in this regard because things like typos/grammatical errors, clarity of explanation, and whether someone worked at an elite company are all attributes that a non-technical person can parse. I was also especially encouraged by undergraduate pedigree not necessarily being a signal of success. At the end of the day, spotting top talent is extremely hard, and much more work is needed. I’m optimistic, however. As more data becomes available and more companies embrace the spirit of transparency, proxies for aptitude that don’t stand up under scrutiny will be eliminated, better criteria will take their place, and smart, driven people will have more opportunities to do awesome things with their careers than ever before.

Acknowledgements

A huge thank you to:

  • TrialPay, for letting me play with their data and for supporting my ideas, no matter how silly they sounded.
  • Statwing, for making statistical analysis civilized and for saving me from the horrors of R (or worse, Excel).
  • Everyone who suggested features, helped annotate resumes, or proofread this monstrosity.

EDIT: See Hacker News for some good discussion.

[1] It is worth mentioning that my statement about MS degrees potentially being a predictor of poor interview performance does not contradict this data — when factoring in other roles I interviewed for, especially more senior ones like Director of Engineering, the (negative) relationship is much stronger.

Looking for a job yourself? Work with a recruiter who’s a former engineer and can actually understand what you’re looking for. Drop me a line at aline@alinelerner.com.

Building an engineering internship program

ClickTime is a small, 20-person, privately held company that works on SaaS time and expense tracking. It’s not a startup, there are no high-profile investors, and revenue growth, while consistent, is certainly not explosive. Intern salaries are significantly below what Facebook, Google, and other elite giants pay. There’s no fancy swag, no relo, no housing, no free food.

And yet, ClickTime’s internship program has consistently been able to land amazing students. A significant portion stayed on for multiple summers and converted to full-time. Others went on to work at Google, Microsoft, and Apple. One student even went on to cofound a YC company.

So, how did ClickTime do it?

First, let’s talk a bit about what interns care about. In no particular order, the list looks something like:

  1. Street cred/social proof/working at a company that looks nice and fancy on your resume
  2. Money/help with logistics (relo, housing)/perks
  3. Impact
  4. Getting to work on cool stuff/getting code into production
  5. Good mentorship and learning new things

I worked at ClickTime for over 4 years as an engineer and was very heavily involved in intern hiring and recruiting, so I hope I can share some insights into how we addressed these things.

In our case, we couldn’t do much with the first two items[1][2]. Instead, we focused on the rest of the stuff and became really, really awesome at it. Even if we couldn’t leverage a huge brand or pay people top dollar/stuff interns’ faces with free food, we could give them a great work culture, meaningful projects, and a sense of having walked out a better engineer. And that’s what people remember.

Here’s what we did.

Engineers bought in to how cool it was to have interns.

This buy-in probably mattered more than any other single factor. Before you do anything else, make sure that you have buy-in from engineering, or your program will be dead in the water. And I’m not talking about perfunctory buy-in where people pay lip service to how cool having interns would be. I’m talking about at least one influential engineer being really, really fucking hyped about the internship program. Why does this matter so much? The thing is, you can sell the level of impact interns have till you’re blue in the face, but until someone actually comes up with what projects they’re going to work on, who they’re going to be working alongside, and how much access they’ll have to higher-ups, it doesn’t matter. At the end of the day, you really need an engineer to be like, “Oh hey, here are some hack day projects we started working on but didn’t have time to finish. Perhaps an intern could work on some of these.” You need people who are excited about the program to be the ones out recruiting and doing internship interviews, because their excitement is going to be contagious. Few things are more compelling than organic enthusiasm. The worst-case scenario is managing to get top talent and then making them fix a bunch of obscure, all-over-the-place, low-priority bugs that won’t see the light of day until 5 releases from now.

The internship program itself was really well-designed.

Every summer, we’d bring in a marketing intern, a product management intern, and up to 3 engineering interns. Over the course of the summer, all the interns would work on an intern project together, effectively forming a mini-company within ClickTime proper. After familiarizing themselves with the product, they’d find ways to make it better or think up brand new features that it was missing. We’d also pitch some ideas to them, and ultimately, they’d pick something they were the most excited to build. The marketing intern would work on ways to engage users (and how to measure user engagement), ways to publicize the feature, and so on. The PM intern would come up with a spec. The engineering intern(s) would build it. By the end of the summer, the students invariably had a fully functional prototype, and many intern projects have ended up in production with only minor tweaks/additions (stuff like scalability, hooking the feature into the rest of the application, etc.). Past examples of intern projects included a way to photograph and upload expense receipts, a mobile app for expense tracking, an incentive module for employees to submit their time, and a way to easily export time and expense data to QuickBooks.

In addition to working on the intern project, engineering interns would be a bona fide addition to the ClickTime development team. They’d get tickets to work on, and their time and contributions would be budgeted into release planning. Historically, in addition to working on tickets and the intern project, our engineering interns would find some part of ClickTime they thought could be better and improve it. We did our best to foster an environment that was receptive to new ideas, and we’d support the interns in building stuff they thought up. Because we tended to hire very smart and driven students, at the end of summer, we’d be left with amazing work that we never tasked them with or saw coming.

Interns had dedicated mentors.

During the summer, interns would regularly meet with the dev lead (PM and marketing interns would meet with analogous leads) and with the CEO, who would regularly provide product-level feedback on iterations of the intern project. In addition, interns had an engineer who was their dedicated mentor. Sometimes being a mentor meant doing code reviews. Sometimes it meant acting as a sounding board. Sometimes it meant answering questions. And sometimes it meant just checking in and saying hey. One of the biggest complaints I heard during internship interviews when I asked students what they didn’t like about their past experiences was feeling like they got left alone all summer. Some students are super proactive, but many are shy and are afraid of looking stupid by asking a lot of questions. You want a go-to person whose job, in the eyes of the intern, is to work with them. To ensure that the mentorship aspect runs smoothly, a trick you can do is to assign more work to a mentoring engineer than he could complete on his own — this kind of thing encourages organic delegation and creates a real dependence on the intern. Outside of the direct benefit to the intern, having engineers take on interns is a good way for more junior people to get a taste of what it’s like to have minions (or, more politically correctly, to start learning how to manage and delegate).

The CEO was out recruiting interns.

ClickTime’s CEO is a Cal alum, and he’d attend Cal engineering career fairs and stand there for hours to talk with students. I remember my first career fair with ClickTime: I made a point of eavesdropping on the conversations he was having to see if I could pick up some pointers. It was actually quite remarkable. Alex made every student he spoke with feel like the most interesting man (or woman) in the world. At career fairs, when you have students lining up outside your booth, you start to feel the pressure to get through them quickly and keep the line moving, but Alex took what seemed like a long time to find out exactly what each student was interested in. Sometimes, students didn’t have an answer ready, and he would then either tease it out of them or, failing that, give them advice about how to figure out what they wanted to do. It was really quite amazing. Students would regularly walk away with different body language than they’d had when they approached the booth; many of them looked a few inches taller.

Everyone selling the program was able to talk about it in depth.

When recruiting, everyone representing the company could give specific project examples, describe to students exactly how their summer would look, and share anecdotes about previous summers.

We were open to hiring underclassmen if they were remarkable.

This is especially handy if you don’t have a huge brand to leverage. Sure, the conversion rate drops off a bit if you hire underclassmen instead of juniors, but if they have a great summer, they’ll likely come back, and at the very least, they’ll tell their friends. A huge reason why ClickTime’s internship program was able to land top talent was because of the reputation it established on the Cal campus over the years.

[1] Having a great engineering brand is probably the most important thing you can do to attract top talent (both interns & f/t people), but that doesn’t happen overnight and is out of the scope of this question.

[2] With respect to money and perks, we decided that because we couldn’t afford to pay for housing or relo, we would focus primarily on local candidates. If you do have some means, though, I’d say that personalized, thoughtful gestures matter a lot more than sweeping perks. If you’re not close to public transportation, you can buy your interns bikes. Or if you don’t have the means to secure intern housing, you can have someone in the company take on and own the task of helping your interns find summer sublets.

Note: This post was adapted from an answer I wrote on Quora.

How different is a B.S. in Computer Science from an M.S. when it comes to recruiting?

Note: This post can now be found on Forbes!

Recently, someone on Quora asked what the difference was between a BS and an MS in computer science from a recruiting perspective. My answer ended up with > 200 upvotes, so I’m reproducing it below:

In my experience, an MS degree has been one of the strongest indicators of poor technical interview performance.

There are multiple confounding factors at play here. In my analysis, I didn’t bother to separate out an MS degree that comes out of a combined BS/MS program (e.g. MIT’s prestigious MEng) from an MS degree from Bumfuckshitsville, Indiana. I also didn’t separate an MS in CS from an MS in some other, related field (e.g. “Information Systems Management”, more on that below). With perhaps the exception of the combined BS/MS programs from top schools (which tend to have stringent undergrad performance criteria), though, I’d be surprised if separating this stuff out made too much of a difference. Whereas MS degrees used to be a means for departments to begin vetting future PhD students, I believe that the purpose has, in some ways, shifted to be a cash cow for the university in question. Stanford’s admissions bar for an MS CS degree is significantly lower than their undergrad admissions bar. Schools like CMU do things like open a satellite Silicon Valley campus where they offer an MS in “Information Systems Management” (which to naive hiring managers sounds impressive until they interview a few people with this degree) and rake in all the moneys.

Part of the problem is that CS fundamentals instruction tends to happen in undergrad computer science courses. If your undergrad degree was in some other field, you can get through an MS in CS without ever taking an algorithms or data structures class. Or you could take a graduate-level algorithms class where the grading curve is going to be different because a good portion of your classmates have never done any programming either. A lot of MS courses I’ve seen on people’s resumes seem to involve taking some machine learning toolkit and using it out-of-the-box on some data set. Is this interesting? It can be. Does it show that you know how to make hard, unexpected design decisions or write code that’s not brittle? Probably not.

The other thing is, if you already have an undergrad CS degree, employers may wonder why you chose to go back to school rather than working (unless you’re going into academia, which is an entirely different animal and out of scope here). A possible line of reasoning might be, “Hey, if this person were really good, he’d be working already instead of waiting out the recession or whatever.”

So, let’s say that you weren’t in a position to get into a top undergraduate program. One tempting option is to try to get an MS from a top computer science school to legitimize yourself on paper. If you actually are passionate about programming, I would urge you not to do that[1]. Although you will look more legit on paper, many startups are catching on to how useless an MS degree can be. Until the market catches up, an MS will probably get you an extra $10K in your base salary. However, keep in mind that you’ve just taken 2 years out of your life and paid some amount of money (how much you pay depends on whether you can TA while studying). Instead of going back to school, take a few months to build something really cool. Teach yourself things. Take Udacity and Coursera classes and potentially use their career placement services. Work with a recruiter whose recommendation will help you get a first round interview even if you don’t have a pedigree. Once you start working, raises you get will quickly make the initial 10K boost pretty insignificant (especially given the opportunity cost and potential tuition expenses).

So, to answer the question: BS >>> MS (if they’re from the same school, assuming you have to choose one or the other)

[1] There are exceptions. If you don’t really love programming but want a coding gig for the stability/income, this could be a good option. Also, if you’re a foreign student looking to turn your MS into a work visa, this could be a good option as well (though you’ll want to distinguish yourself somehow from all the other foreign students trying to do the same thing).

Note: This post was adapted from an answer I wrote on Quora.

What computer science knowledge/concepts do software engineers use on a daily basis?

For as long as I can remember, there’s been a disconnect between the kinds of stuff people ask in software engineering interviews and the day-to-day work that software engineers do. As a recruiter and a former engineer, I find this disconnect particularly salient because when I evaluate candidates, I have to ask myself two separate questions:

  • Will this person be able to pass a few rounds of coding interviews?
  • Is this person actually good?

It’s a bit funny that whether the person is good is somewhat immaterial if he isn’t going to make it past the CS fundamentals gauntlet. That said, I think that the standard interview process isn’t the worst predictor of success on the job — people who are good at solving toy problems, have solid knowledge of various data structures, and for whom evaluating code efficiency is second nature also tend to be good at being engineers. This is perhaps a classic correlation vs. causation problem — people who are good at being professional coders also tend to be good at being interviewed. However, I’m not sure that they are good coders entirely because they are good interviewees. In other words, I would guess that this approach has a pretty high false negative rate. Unfortunately, I don’t have a good solution to this problem, and surely lots of false negatives are better for a company than the same number of false positives (especially if the company has a good brand among engineers that affords it a revolving door of applicants).

Recently, someone on Quora asked about what CS concepts are actually used on the job (as opposed to what candidates are expected to bring to the table). Bulat Bochkariov did an amazing job of attacking this question and discussing coding interviews. With his permission, I’ve reproduced his answer here.

Answer by Bulat Bochkariov:

The set of questions people ask is frustratingly disjoint from what’s important day-to-day. There are reasons, but I’ll try not to get sidetracked here.

Here’s a tiny list of things an engineer should be prepared for.

  • It’s 2pm. The release gets cut at noon tomorrow. (Or we could delay, but that would suck kind of a lot.) This piece of code is running way too slow for us to ship it with our names on it. How would you decide what to do? And then how would you do it? Remember that there’s still that other feature we were working on.
  • We need to do X. You know our stack and our toolchain. Research the options, then determine the next step and take it.
  • MySQL is seriously melting with this growth. We can’t keep doing all these joins.
  • There’s a critical bug in production.
  • We’re losing velocity. Our crufty code is keeping us from making changes quickly. What’s the root of the problem? What would really good abstractions look like? What are we trying to do? And how do we keep doing it while cleaning up this mess?
  • EventMachine is leaking more than normal.
  • Someone took out Git. There’s a crater.
  • Is it ok not to encrypt this? Like, what’s the worst that could happen. Right?
  • I think we’ve been hacked.
  • We’ve got an interview today. They even mentioned some security experience. What do we ask? How? What will it tell us?

For the most part, no one cares if you can make a red-black tree. It’s not something you’ll be doing. It’s just a proxy for a few things that do matter: your ability to solve new problems and an understanding of the content in a core CS curriculum—or whether you can write code that compiles and runs. But those are just the basics. They’re necessary, but they’re not sufficient.

What’s more important is your understanding: whether you can see the forest for the trees and find your way to where you’re going. You don’t think in JavaScript or Python but in problems and solutions, on multiple dimensions at once. You know the details are important but they aren’t the point—except when they are, and you can tell the difference.

You don’t do stuff that’s clearly bad; consider that a given. You don’t leave O(n^2) code turds on the floor or weird recursion time bombs that will blow the stack if n gets bigger than a thousand. But in a Maslow pyramid for programming, that stuff is on the bottom steps. It’s important to keep reaching for the top.
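
To make that recursion time bomb concrete (assuming Python, where the default recursion limit is roughly a thousand frames), here’s a minimal illustration; the function is made up for the example:

```python
# A linear recursion burns one stack frame per element, so it blows up
# once the input outgrows Python's default recursion limit (~1000).
# Bonus points: the slicing also makes it one of those O(n^2) turds.
def sum_list(xs):
    if not xs:
        return 0
    return xs[0] + sum_list(xs[1:])

print(sum_list(list(range(100))))  # fine: 4950
# sum_list(list(range(10_000)))    # RecursionError: maximum recursion
                                   # depth exceeded
```

An iterative loop (or the built-in sum) sidesteps the problem entirely.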

The most important question you can ask is “why.” What problem does this project solve, and what problems is it made of? Why should I consider Factor A over Consequence B? Why the hell is there a ServletBeanAdapter here? And keep in mind that sometimes it’s because an engineer took time to fully understand the problem, examined the upside, noted the risk to his or her own timetable, considered the security concerns, thought about maintainability, sketched out some more powerful solutions, decided not to waste the effort here, and made a quick decision that looked right when faced with finite time and lots of ambiguity. That’s what software engineering is about. You develop judgment and you learn from your mistakes.

And that’s what no one asks about.


Here’s where I should say a couple things. First: I’m a “new grad,” which means I’ve interviewed enough that I can’t count them on my fingers and toes, but I wouldn’t need to grow a lot of extra hands. That leaves me biased in a number of ways. Second: The context is important. If you work on machine learning, you’ll definitely need some statistics. For distributed systems, you’ll need data structures and concurrency. I don’t mean to say that core CS is not important. I’m talking here in generalities, so please don’t take them for specifics.


In my experience, what people ask is more like this.

  • FizzBuzz variants; easy looping questions. (Because it would be rude to check your pulse.)
  • Counting nodes or levels in some sort of tree. The goal is usually to see if you can think recursively or how you’ll try to hack the problem if you don’t. I love to harp on these, but honestly they make pretty good screens.
  • Basic questions from an algorithms class. Can you solve a searching problem if you aren’t told how? Can you mentally connect the properties of BFS with level-order printing in a binary tree?
  • Data structure questions with a fairly clear right answer. Given some data and a time or space constraint, can you process it a certain way? Can you make a stack with constant-time maxValue()? What structures would you hook together for an LRU cache? A lot of these are just fill-in-the-blank if you know the standard structures’ time and space costs, but that’s already showing that you know something worth knowing. (See the sketch after this list.)
  • Sometimes they’re just random made-up problems that you probably won’t guess in advance. Maybe some weird string questions or a twist on a dynamic programming problem. Some companies might ask more mathy stuff like rolling averages or random number generation.
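
To pick one of those apart, here’s a minimal sketch of the constant-time-max stack mentioned above (in Python, with max_value standing in for maxValue()): the trick is a second stack that tracks the running maximum.

```python
# Stack with O(1) push, pop, and max_value: an auxiliary stack mirrors
# the main one and stores the max of everything at or below each level.
class MaxStack:
    def __init__(self):
        self._items = []
        self._maxes = []  # invariant: _maxes[-1] == max(_items)

    def push(self, x):
        self._items.append(x)
        self._maxes.append(x if not self._maxes else max(x, self._maxes[-1]))

    def pop(self):
        self._maxes.pop()
        return self._items.pop()

    def max_value(self):
        return self._maxes[-1]

s = MaxStack()
for x in [3, 1, 4, 1, 5]:
    s.push(x)
assert s.max_value() == 5
s.pop()
assert s.max_value() == 4
```

The LRU cache is the same flavor of fill-in-the-blank: a hash map for O(1) lookup hooked to a doubly linked list for O(1) eviction ordering.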

And of course there’s plenty more. There’s a bunch of different lists online that people study with. I just don’t think what’s in them is the point. The most important thing I have to share, strictly true or not, is that the skills you’ll need for interviews are not the skills you’ll need at work.

If you’re interviewing, brush up on your textbook knowledge. Practice your dynamic programming. Go code up a union-find in C. Worry less about what skills you really need and more about the ones you need to demonstrate. They’re different. The standard problems people test you on are proxies for the things they really want. They just have a funny and imperfect way of finding out.

View Answer on Quora