Monday, September 24, 2012

The Attrition Funnel



My post on the Recruitment Anti-Funnel touched on the idea that one of the worst things you can do for hiring is to lose good people. The downside of losing good people should be obvious, but let’s state it explicitly in terms of recruiting - when you lose someone good, you suffer multiple losses for your recruiting efforts:

  • you’re short one person (obviously) - however many people you were trying to hire before, now you need one more. Depending on how your funnel converts, this could require adding a few hundred more people to the top of the funnel (see the back-of-the-envelope sketch after this list).
  • you risk having someone out there in the market telling your potential recruits that your company is not a good place to work
  • more likely, even if that person isn’t telling people that your company is not a good place to work, LinkedIn makes it clear that they left; in the absence of information about why someone left, outsiders will assume the worst, taking any attrition as a sign that your company is not a good place to work.
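
To make the first point concrete, here is a back-of-the-envelope sketch of the funnel arithmetic. The stage names and conversion rates are made-up assumptions, not data from any real company - plug in your own stage-by-stage numbers.

```python
# Hypothetical funnel: resume -> phone screen -> onsite -> offer -> accepted offer.
# All conversion rates below are illustrative assumptions.

def candidates_needed(hires_needed, stage_conversion_rates):
    """Work backwards from hires to the number of candidates
    required at the top of the funnel."""
    candidates = float(hires_needed)
    for rate in reversed(stage_conversion_rates):
        candidates /= rate
    return round(candidates)

rates = [0.10, 0.20, 0.25, 0.50]
print(candidates_needed(1, rates))  # => 400 resumes to replace one engineer
```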


Of course, this ignores all the other negative effects on your team and your company that come with losing good people; this is just how losing good people makes recruiting harder for you.

As with bad recruitment practices, I have spent enough time with startups over the past few years to see a number of common bad habits that lead to attrition among engineers - let’s call this your Attrition Funnel, the gradual sequence of steps you take to move people from “great employee” to “former employee”.

You’re managing down to people. Engineers in today’s market are less like traditional employees and more like free-agent entrepreneurs; each day they are making a conscious choice about whether to keep working at your startup, whether to return the phone call from the recruiter who has been hounding them, whether to IM a friend at a cooler startup to line up a new job, or whether to just quit and finally figure out doing their own thing.

An engineer in this mindset isn’t focused solely on the technical challenges that are directly required of them in their role - they are constantly evaluating every aspect of the company to assess whether the company is headed in the right direction. They are reviewing every metric they can get their hands on (which, by virtue of their access to internal datastores, is much more than someone non-technical might expect). They are asking questions about marketing strategy, product priorities, and sales compensation schemes.

In that environment, I am always flat-out gobsmacked when a CEO asks me, “How do I get engineers to just stay focused on the technology problems instead of always poking around the rest of the company?” In the pathological case, this comes out more like “No, I am not going to explain the company strategy to the engineering team, their job is to write more code and let me worry about the strategy!”

For an executive dealing with engineers, this phenomenon is perhaps exacerbated because it is a behavior somewhat unique to engineers. I’ve rarely seen an individual contributor salesperson ask a CEO challenging questions about technology strategy, for example. This may be a matter of engineers flexing their market power (e.g., “I need to know this to assess whether I can be at a company with better prospects”), or it could be a matter of a different perspective on the part of engineers; I can’t say. For executives in non-technical parts of the company, the glaring difference in behavior between engineers and non-engineers can lead to the conclusion that somehow the engineers at your company are just exceptionally difficult to deal with and intent on learning about areas you think they shouldn’t be concerned with. Trust me, engineers at every startup want answers from the executive team about every aspect of the business.

In any event, leaders in startups need to be prepared to handle the challenging questions that come from their engineers, regardless of the topic. Failure to address the questions from your engineers will make them feel like you are treating them with less respect and transparency than they are entitled to; this is a shortcut to stripping them of a sense of empowerment, and makes it very easy for another startup to recruit them away with promises of real impact and access to the executive team.

You’re churning on strategy. It’s become very trendy in the past twenty-four months or so to pivot, to make a radical change in strategy that redefines a company. Pivoting shows that you’re lean! You’re always learning! You’re nimble! Still, there are a lot of ways a pivot can work against you in terms of attrition. Sometimes, a pivot is really just flailing, as Steve Blank describes well here.

I don’t want to get into the debate about how to avoid bad pivots, so, for the sake of this discussion, let’s assume that every pivot you’re considering is a stroke of strategic genius that will set your company up to double growth rates, triple revenue, and give everyone a unicorn.

Even in the case of the perfect pivot, a strategy change can have a big impact on employee morale and hence, attrition. A large part of the decision to join a startup, much more so than a big company, is subscribing to a vision of where that startup is heading and how it is going to change the world. When the strategy changes, employees can be left feeling like they were misled or, in the worst case, lied to, about the prospects of the original vision.

In some sense, this issue is a concrete case of managing down to your engineering team - they want the opportunity to evaluate a new strategy, understand all the factors that went into it, give their input, and decide whether they are signed up for this new vision. In the best case, the CEO or their designee should take the time to meet with the engineering team as a group or individually and really sell them on the pivot. The pathological case I have witnessed too many times is the CEO who says, “This is the new strategy I’ve figured out, this is what we’re doing. Anyone who isn’t ready to sign up for this right now is just not a team player.”

You’re not paying market rates. This is the easiest mistake to avoid and the one that pains me the most to see. I’m on the record that I think engineering compensation has gotten out of control, but for the foreseeable future, at least, as you’re budgeting for your startup, you have to plan for the fact that engineers are expensive. If your financial plan depends on people being willing to work for you at a salary 20-30% below what they can make at a similar-stage startup, you are going to be in trouble.

I intentionally did not highlight this issue in the post on recruiting, because I have found that many great engineers are willing to take a below-market pay rate when joining a new startup, even as compared to similar startups. As such, paying below market doesn’t always hurt your recruiting efforts. This is a good thing for everyone involved - you hire new engineers, give them big equity packages, and incent them to make their equity worth gobs and gobs of money. In my experience, this situation can usually last for 12-18 months without issue. After that, if there’s no market confirmation that the equity is increasing in value, people naturally want to see an increase in their salary. If you’re not ready to bring salaries up to market rates, you can count on people starting to look around.

Figuring out how to assess the appropriate market rate can be a challenge, as good data on comparable startups is generally difficult to find. The recently-released Wealthfront tool is the most credible and useful way I’ve seen to get a quick gauge for what you should expect to pay people.

Even the Wealthfront data is pretty broad though, making it tough to assess how your compensation compares to your immediate peers. The most pragmatic way I know to assess where you stand relative to the rest of the market is to collect data through the hiring process. When negotiating offers with engineers, many people are willing to disclose the details of their competing offers. Every competing offer you can learn about is a concrete piece of evidence showing how your offer compares to your peer companies for the same employee. Over time, enough data can help paint a pretty clear picture of the comp structure at your peer companies.
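
A lightweight way to put this into practice is simply to keep a running record of every competing offer you learn about and look at the distribution from time to time. The sketch below is illustrative only - the candidates, company names, and numbers are hypothetical placeholders.

```python
from statistics import median

# Each entry: (candidate, competing company, base salary, equity as % of company).
# Every value here is a made-up placeholder.
competing_offers = [
    ("candidate-101", "Startup A", 115_000, 0.15),
    ("candidate-102", "Startup B", 125_000, 0.10),
    ("candidate-102", "Startup C", 120_000, 0.20),
    ("candidate-107", "Startup A", 130_000, 0.05),
]

salaries = sorted(offer[2] for offer in competing_offers)
print("competing base salaries:", salaries)
print("median:", median(salaries))
print("range:", salaries[0], "to", salaries[-1])
```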

Of course, there are innumerable other ways to build a strong Attrition Funnel. The spirit here is not to be exhaustive, but these are the mistakes I have seen startup leaders making over and over again that are most easily avoided and most likely to send a steady stream of good engineers walking out the door for better opportunities.

Tuesday, September 18, 2012

The Engineering Recruitment Anti-Funnel

I learned yesterday that the referral bonuses for engineers at some NYC startups are starting to get huge - one prominent NYC startup is now offering its employees $10,000 for each successful referral. This is a classic “throw money at the problem” solution to attracting engineers, and, while economic incentives certainly do impact behavior, I doubt it’s going to help enough.

Hiring engineers is hard, hiring great engineers is harder, and hiring great engineers at scale is impossible if you insist on doing it wrong. Still, I don’t want to write Yet Another Blog Post On Hiring Engineers, so today let’s talk about what I’m going to call the Recruitment Anti-Funnel.

So much of hiring these days is focused on the funnel - increasing a referral bonus, for example, is meant to increase the number of applicants at the top of the funnel, using external recruiters is meant to be a way to get a higher-quality funnel, faster turnaround time from interview to offer is meant to reduce leakage from the funnel, etc. The thing that I see these days is that so many startups are doing so many things outside of the funnel that just broadcast the message “Good engineers should not work here,” and I don’t think they have any clue that they’re doing it.

In no particular order, here are the Anti-Funnel patterns I see over and over again.

You’re churning through the good engineers you already have. This is an oft-overlooked factor, but many big startups in NYC have burned through a number of great engineers, people who have moved on for any number of reasons, usually along the lines of unhappiness with the company, the product, or the team. Every time you lose a good engineer, you’re putting someone out into the marketplace who is willing to tell other engineers to avoid your company. This message transmits over drinks, at meetups, in IM conversations - “Oh, you’re talking to X? Ryan worked at X for 8 months and said they have no clue, I’d stay away from there.” There really is a guild of software engineers (and it exists outside the valley!), and the members will warn each other off of joining a company that’s burning through good people.

Really great technology startups fight tooth and nail to keep their good people. If a good person is unhappy, that is more likely a problem with the company than a problem with the person.

You’ve hired an engineering team of “band-aids”. Many non-technical founders, stuck between the rock of “can’t find any good engineers” and the hard place of “have to ship product to keep the board happy”, take the cheap way out and fill seats with any engineer who can string together three lines of PHP. You can convince yourself that this is just a band-aid, something to get you through until you can start to find some stronger people.

Hiring for band-aids is like choosing the Dark Side of the Force: it’s quicker, easier, more seductive, and once you start down that path, forever will it dominate your destiny. It’s a well-known startup adage that B players hire C players, but we less often consider the inverse; I’ve never met an A player or even a B player who wants to join a team of C players. Your band-aid hires become the strongest factor in discouraging strong people from joining your team.

Great talent attracts great talent; bad talent repels great talent.

You’re broadcasting your complete lack of understanding for how hard it is to build software. I am constantly meeting with entrepreneurs who show me a development roadmap that should take two years to build, then tell me that if they can hire two or three great people it should all be done in six months. No one pushes as hard as I do to get more things done more quickly, but the practical reality is that software is hard, and until you’ve been through the cycle of building big systems, you really can’t appreciate how hard it is.

This pattern is particularly pronounced in early stage startups, where the founders are not technical and don’t have good help in hiring engineers. When you demonstrate to engineers that you have unrealistically ambitious goals for what they should be able to do, you’re advertising to them that coming to work for you means endless nights and weekends trying to live up to ludicrous schedules.

Like I said, there’s no good way to learn how hard it is to build software until you’ve built software. If you want to at least get a feel for it, I highly recommend the book Dreaming in Code - it’s a great non-technical account of how Mitch Kapor, the creator of Lotus 1-2-3, set out to build a new open source software product with a stellar team and tremendous backing, and, for all intents and purposes, failed.

If you don’t understand anything about building software, you can’t attract people who are going to build software.

You have shitty office space. This Anti-Funnel pattern is certainly not as lofty as the others, but it’s pragmatic and it’s real. Working conditions matter a lot to good people, and when they come in to interview, they are evaluating your space as somewhere that they will spend 9 to 12 hours a day. No one is saying you have to have offices like Google, but many great startups have great office space. It is expensive, and, if you’re looking at a straight P&L, it looks like a waste of money, but it is a lot cheaper than $10,000 referral bonuses.

Good office space is expensive when viewed as an administrative cost; it’s super cheap when viewed as a recruiting cost.

Now of course, to build a great engineering team, you need to optimize your hiring funnel - you’ll get no argument from me about that. My point here is that you could have a perfect hiring funnel, but if you’re following these Anti-Funnel habits, the biggest referral bonuses in the world won’t be enough to help you build the team you need.

Monday, August 27, 2012

Leadership Mechanics: Handling Challenges From Your Team

If you've been an undergraduate at Carnegie Mellon any time in the last twenty or thirty years, you've been impacted by Michael Murphy. You might not have known about Michael at the time, but he has held a variety of roles in Student Affairs in that time, so to some extent, much of what happened to you outside of a classroom was under Michael's influence.

When I was at CMU in the mid-to-late 90's, Michael was Dean of Student Affairs, a vast role that included responsibilities for things like housing, dining, and student activities. I could say a lot about great leadership traits I saw Michael exhibit at that time, but one of the things that stood out to me was the way he'd handle students challenging him about various choices made by the administration. I remember one instance as clearly as if it were yesterday.

As a bit of background, at the time, the campus had two dining venues that accepted our meal plan, and they were both awful. Just terrible. For most of my second semester freshman year, dinner each night was cheese fries, because they were so hard to get wrong.

Anyhow, at some point Michael was doing an open Q&A with students, and someone asked, "Why can't we have a McDonald's or a Taco Bell on campus?"

Michael's response was not so much an answer as it was a full treatise defining the pros and cons of having a franchise fast food restaurant on campus. He acknowledged the appeal of having something familiar and consistent, he conceded that it was an option that had been raised and considered on multiple prior occasions, and then proceeded to recite a set of reasons why the idea had been rejected in the past. Fast food menus don't vary, at all, meaning it is easy to get bored very quickly with what's available. Large corporations don't have much flexibility in how an individual franchise works, so the school would have little influence on the operation. It's not clear that the economics of having a franchise on campus would appeal to a major fast food corporation. Putting such a franchise on campus could draw in people from outside the campus community, which is not certain to be the right choice.

Michael was under no obligation to give a thorough explanation of the reasoning behind the choice not to have a fast food franchise on campus. Given his position, he could have simply said, "We've looked at that idea, it wouldn't work, we're not going to do it." I think, though, that his goals as a leader were better served by going through the detailed explanation. In particular:
  • He showed the entire audience that he respected their input and responded to it thoughtfully. When you think about it, a group of entitled undergraduates probably didn't deserve that level of respect from a high-level university administrator, and yet he showed the respect anyway. 
  • He demonstrated that he had a command of all aspects of the issue. There's tremendous power in simply demonstrating that, as the leader responsible for such choices, he had given it far more thought than his original questioner had. 
  • He worked to persuade the audience that the solution they had in mind wasn't as simple as they thought. Whether or not Michael convinced anyone about the right choice to make, he made sure everyone walked away understanding that the choice was not black and white. I think this was particularly valuable because Michael treated the audience in a manner appropriate to their intelligence - this wasn't a matter of communicating to them a decision that had already been set in stone, as much as it was a matter of bringing the audience into a nuanced dialog with no clear best solution.
What leader hasn't had the experience of being challenged, particularly in public, by someone on their team who wants to see things done differently, or who is questioning a decision, or just wants to sound off about something that's annoying them? The anti-pattern that happens all too often is for a leader to shoot off something like "It's more complicated than you understand, but what we're doing is the right way," or, even worse, for the leader to respond by directly attacking the questioner - a response that usually comes from a place of feeling disrespected.

Here's why this really matters: the leader who shuts down challenges from their team leads that team to abdicate all responsibility for change - what member of a team tries to push the group in a new direction if their input isn't respected by the leader? The ultimate outcome is a team where the leader is the only one ever creating change, because the rest of the team has seen that their own attempts are never treated fairly.

As a leader, it takes a tremendous amount of self-confidence to hear a challenging question, listen to the intent of the person asking it, and respond in a spirit of mutual respect and productive discussion. At the core, I have found that leading in this kind of situation requires the humility to accept that sometimes someone will make such a challenge and I will have to say, "There are pluses and minuses to what you're saying, but yes, on the whole I think you have a better way, let's figure out a way to make that happen."

Those experiences have always created positive outcomes for me on teams I have led, which is why I think a careful and considerate response to challenges from your team is one of the critical Leadership Mechanics.

Wednesday, August 22, 2012

Leadership Mechanics: Staying Above the Fray

In college, I was in a fraternity, but not just any fraternity - I was part of a group of incredible men who founded a new fraternity on campus. I don't really know what it's like to be in a fraternity that has been around for decades and has established protocols, but I know that in my fraternity, we spent most of the first two years of our existence arguing about The Way Things Should Be Done.

What should the minimum GPA be? What about the minimum GPA for elected officers? What should we do about brothers hosting keg parties off campus? What about on-campus but not in the house? When do we hold elections? Can we spend money on paintball during rush? Should we be allowed to eat in meetings? I'm not kidding, these are the things we argued about.

The house generally divided into two groups, which, in the interests of fairness to all involved, I'll call the drunkards and the uptight assholes. For any given question, there were usually two takes on it, and it was always the same set of people arguing the same predictable positions.

We had a pretty open policy toward discussion in those meetings - if a brother had something to say, he would always get the chance to say it. This is why my Sunday evenings in those days were completely unproductive - house meeting would start at 8 and go until 11 quite frequently.

Among all the debating and arguing about the finer points of fraternity management, we had one very important brother, Jeff. Jeff was a basketball player and electrical engineering major, and usually a very quiet guy. When we were debating some of these "issues", Jeff usually sat silently and listened.

But after about six months, a very interesting pattern emerged. After endless debate on some minuscule topic, Jeff would slowly raise one giant basketball-player hand, never higher than his chest, and wait for his turn to speak. And whenever he did, he would calmly summarize both sides of the issue and propose a solution. But here's the really crazy thing: without fail, everyone would hear it and say, "Yeah, he's right, let's do that."

It was uncanny. It got to the point where, if we were having a debate and Jeff raised his hand, someone in the room would say "Shut up guys, Jeff wants to say something". He was like our own personal Messiah of mediation.

To this day I can't figure out exactly how Jeff was able to create immediate consensus between two groups that had been yelling at each other moments earlier. But I can point to one key tactic he employed - Jeff never spoke up until he thought he could find a suitable middle ground; he just waited patiently. He stayed above the fray.

If he had spoken up earlier, he would have been perceived as having "taken a side", and all further comments would have been colored by that perception. Instead, it was as though he was the one person in the room who could speak directly to all stakeholders, showing them that there was a balanced approach suitable to everyone.

Staying above the fray is useful in any group dynamic, but it's critical when you're in the position of leadership - nothing can kill a productive debate more quickly than a leader indicating a preference for a particular outcome. In the common case, the leader's comments have the effect of silencing all counter-arguments, thereby ending the discussion before it can be fully explored.

In my own career, I work very hard to employ a similar tactic every chance I get. If two people in a meeting are arguing about some point, I try not to dive in right away and pick a side; I wait until I think I've heard what each of them has to say, and then try to point out underlying motivations or shared interests that can build toward consensus.

As a leader, it's critical to me that I not be perceived as playing favorites or as denying anyone a fair say in a debate. More often than not, a good group of people can come to a great answer without the intervention of a leader, but when it is required, that intervention comes best from someone who has obviously listened to all input and is focused not on a personal preference but on the solution that is best for everyone - in other words, the person who has stayed above the fray.

Monday, August 06, 2012

Leadership Mechanics: Asking Difficult Questions


I was a Boy Scout as a teenager, and my first scoutmaster was an incredible man we all called Mr. P. He was one of my earliest leadership role models, so I want to start the discussion of Leadership Mechanics with him. Mr. P was a powerfully quiet man - one of the few people I know who possessed the ability to silence a room of 40 teenage boys simply by standing quietly and waiting.

In Boy Scouts, for each new rank you achieve, there are a number of requirements - earn this many badges, go on this many campouts, and so on. The final requirement for each rank was to go through a "scoutmaster conference" - sit down with your scoutmaster, talk with him for 30 minutes or so, and then he signs off for you to get your rank.

When I first started working on my boy scout ranks, I had a notion that the conference would be just a pro forma conversation, something along the lines of "tell me what you learned while earning this rank".

Mr. P's conferences, though, were nothing like my expectation. They rarely had anything to do with the requirements of the rank. In his calm, respectful demeanor, Mr. P would drill in on a line of questioning that was, more often than not, pretty challenging.

A quick aside: it should be no surprise that, as a teenager, I was a first-class nerd. My idea of a good Saturday afternoon was winning a math competition followed by writing programs to find prime numbers, and I lacked any awareness that other people didn't have fun the same way. That my parents allowed me to spend weekends in the woods with a horde of teenage boys was either a sign of great trust in my ability to take care of myself or great foolishness. The bottom line was, I was a constant target for the other scouts - they called me names, filled my shoes with shaving cream, threw buckets of sand into my tent, all typical teenage stuff.

To this day, I can remember sitting with Mr. P. in the scoutmaster conference for my Second Class Scout rank, and him really exploring with me the topic of why I didn't get along with the other boys. It had nothing to do with the rank, or what I had learned, but it did have a big positive impact on my future in scouting.

What really stands out in my memory was that Mr. P. was very pleasant and even caring throughout these conferences, but he would also keep asking questions, the kind of questions that made me reconsider some of my basic beliefs, until I could give a thoughtful answer.

"What does it matter what names they call you?"

"Um, I don't want to be called those names..."

"What part do you play in triggering them to do these things?"

"What did I do??? I didn't do anything. I was just around. OK, maybe I might have made some jokes about how I was smarter than they were, but a joke is not even close to the same as what they did!!"

"Why do you even react to having my shoes filled shaving cream?"

"Why do I react? They filled my shoes with shaving cream - what else would I do?"

"Does that do anything to discourage them doing it again?" 

"Not really, I guess...."

"So why not choose just not to react?"

"How would you even do that??"

Throughout this conversation and many others like it, Mr. P. pushed, gently but firmly, to ensure that I was really learning and growing as a young man.

Here's why I think this is so powerful as a leadership technique - a great leader is someone who is pushing everyone on their team, kindly, firmly, and consistently, toward ever-better performance. Too often we think of performance feedback as falling in the polar opposites of "Everything's great, keep it up, chief!" and "Oh you really screwed that one up, you better not screw up again," but I think a really strong leader can say, "You're doing great, but I want you to explain this situation to me in more detail."

So this is what I want to call the first of the key tools in Leadership Mechanics: you need to be comfortable asking really difficult questions, and you need to hold people to giving you an answer.

I use this tactic every day with my teams; for example, when things go wrong, I want to make sure we learn something and improve. I don't need to cast blame or chastise people, but I do expect that the situation provides an opportunity for learning. If someone doesn't give an answer that holds water, I ask again. I collaborate on ideas for what we can learn, but I make sure the answer comes from the person who really needs to learn the answer.

You see a similar philosophy in the Toyota Total Quality Management approach of "Five Whys" - in which a problem is debugged by asking why it happened, and, for each reason provided, you ask why that happened, the idea being that by the time you get five levels deep, you've discovered the real root cause.

I developed a huge respect for Mr. P. during our scoutmaster conferences. His commitment to asking very challenging questions, and his expectations of getting genuine answers, pushed me to grow in ways I still appreciate.

As leaders, I think we can all grow our teams and earn their respect in the exact same way: challenging everyone around us to develop genuine thoughtful answers to difficult and uncomfortable questions.

Leadership Mechanics

I want to start a new series today on a topic that's been rattling around in my head for a few years now. I want to talk about leadership, but in a new way.

There are hundreds of leadership books from successful CEOs, and thousands more from coaches, trainers, and leadership theorists. These are great books - I started reading many of them at a pretty young age, and much of what I understand about great leadership comes from that study.

However, what these books rarely get into is the real working day-to-day mechanics of leadership at all levels of an organization.

A leadership book from a successful CEO usually betrays the luxury of being, in most cases, the final decision-making authority. A CEO's leadership can impact thousands of people, and it is, without a doubt, a very challenging leadership role, but the lessons learned in that context don't necessarily translate to many more common types of leadership roles.

The theorists and coaches provide great longitudinal review of what works in a variety of situations - working with leaders across many organizations and roles allows them to extrapolate nicely and find useful patterns. However, I find the coaches' perspective frequently to be hollow - you can tell that their narrative is a report from what they have seen, not a first-hand account of visceral, gut-wrenching experience.

And neither group tends to report on the very tactical aspects of leadership. For example, we can all say that listening is an important leadership skill, but what are the mental habits you build to ensure that you are listening to all those around you?

So today I want to start a series on what I'm going to call Leadership Mechanics - the nitty gritty, nuts and bolts details of what I have observed great leaders doing day in and day out, and what I have adopted in my own leadership. More than anything, my goal is to start some conversation about the key habits that great leaders use in their everyday work, digging beyond the platitudes that fill so much of the leadership writing out there.

For the ten of you that read this blog, I hope you enjoy it, I hope it expands your thinking a bit, and I look forward to hearing your thoughts.

Sunday, July 10, 2011

Not All Early Optimization is Premature

Andrew Parker and Jeff Atwood have both had great posts recently about performance as a feature, but I think they've each actually stopped short of a powerful point - improving a product experience by 10 or 20 percent through optimization is great, but there's incredible power when you unlock fundamentally different feature sets through radical optimization. This is an area that the code-first-then-optimize process misses entirely, because incremental improvements on the same basic design will never lead to order-of-magnitude performance improvements.

The power of such performance improvements is one of the most important lessons I learned at Google. A couple examples demonstrate the kind of optimization I'm thinking of:
  • In 2004, when the standard storage for webmail was 2MB, Google was able to launch Gmail with 1GB of storage, because GFS provided a means for managing disk that was orders of magnitude cheaper than what the other providers were using. Rumor has it that Yahoo went out and gave NetApp millions of dollars to buy storage devices in order to come anywhere close to what Google was offering. Underlying all of this was an optimization of storage and disk that was vastly more efficient than what others in the industry were capable of at the time.
  • One of the coolest features of Google Maps is the ability to see a route, then grab it with your mouse and drag it to change the route. That feature is possible because Google developed a radically more efficient route-finding algorithm, years ahead of what anyone else in the market could offer. The difference between computing a route in 1 second and computing it in 10 milliseconds means you can suddenly offer users the ability to compute hundreds of times more routes.
It's part of the modern software engineering zeitgeist that "premature optimization is the root of all evil," but as I researched this post, I found out that the full Knuth quote is a lot more illuminating than just that snippet; the full statement attributed to Knuth is actually, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" (emphasis mine).

So yeah, we can all agree that "performance is a feature," but that fails to convey the power of high-performance systems. We can talk about caching database results or using a CDN for static content, and everyone should be doing those things, but let's not be afraid to go much, much deeper. Consider the core operations of your service, then imagine that you could speed them up by 100x - what radically new features would be enabled? With those radical new features in mind, start working backwards to figure out how to actually make those 100x improvements.

The big challenge is that these order-of-magnitude optimizations are design optimizations, not the kind of changes you can make after the fact. In design discussions, the engineer arguing for keeping the entire datastore in memory is immediately shouted down with the "premature optimization" line, but I think it's time we start fighting back on behalf of design-time optimization.
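
To make that last point a bit more tangible, here is a toy comparison (not a real benchmark) of a simulated per-request data fetch against an in-memory lookup. The 1 ms sleep is an arbitrary stand-in for a network or disk round trip, not a measurement of any real system, but the orders-of-magnitude gap is exactly the kind of headroom that design-time optimization buys you.

```python
import time

records = {i: f"value-{i}" for i in range(100_000)}

def fetch_remote(key):
    time.sleep(0.001)        # stand-in for a ~1 ms network/disk round trip (made up)
    return records[key]

def fetch_memory(key):
    return records[key]      # in-process dictionary lookup

def avg_seconds(fn, n=500):
    start = time.perf_counter()
    for i in range(n):
        fn(i)
    return (time.perf_counter() - start) / n

print(f"simulated remote fetch: {avg_seconds(fetch_remote):.6f} s per lookup")
print(f"in-memory lookup:       {avg_seconds(fetch_memory):.9f} s per lookup")
```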

Sunday, April 10, 2011

Making Effective Use of Code Reviews

I’ve been reading chunks of Coders at Work this weekend, and the topic of code reviews has come up a few times. Code reviews are clearly a very useful development technique, but it can be tricky to apply them in a way that improves code quality without slowing productivity.

Poking around the web, I don’t see a lot of great writing about code reviews, so I wanted to share the guidelines we use at Yext for effective code reviews. The guidelines below are captured from an email I sent to the engineering team almost a year ago, and I’m proud to say that our code reviews do a lot to improve code quality while keeping the team operating at peak efficiency.

These guidelines are heavily biased by my experience at Google. There, I saw how code reviews could identify and eliminate many preventable bugs, including many that the original developer never would have found. I also saw innumerable cases of reviewers who lost perspective on the larger goals of the team and the company, and thus acted to prevent progress on important projects as a result of matters of personal preference or sheer obstinacy. My goal for Yext is that we capture the best aspects of code reviews while eliminating the worst.

With that in mind, my recommendation for code reviews is that they address the following points:
  • Correctness: Does the code do what it claims to do? Is the code correct in both the nominal case and the boundary cases? As a reviewer, this is your opportunity to point out edge conditions of which the original developer may not have been aware. An important special circumstance is when you may be aware of legacy systems or features that interact with the modified code in some non-obvious way.
  • Complexity: Does the code accomplish its task in a reasonably straightforward way? If you can point out simpler approaches that do not compromise the correctness or performance of the code, you should.
  • Consistency: Does the code achieve its basic goals in a way that is consistent with how similar code in the codebase achieves those goals? Is it re-using the available libraries and utility classes? Where possible, has code been refactored for re-use instead of just copying and pasting?
  • Maintainability: Could the code be extended by another developer on the team with a reasonable amount of effort? More than any item on the list, this is the karma investment you make by doing code reviews – the code you review today may be the code you have to update tomorrow, so taking the time to make sure it’s maintainable by others pays itself back to you.
  • Scalability: Will the code be performant at the expected volumes? It is important that this question always be asked in the context of expected volumes. When building a new product in an untested market, it is fine to write code that works for 100 users but not 10,000; if the product should be that successful, you can profile, optimize, and, when necessary, re-write the critical bits. The corollary is that you should not spend time optimizing code when the market demand is unproven.
  • Style: Does the code match the team style guide? This should rarely be controversial. The obvious assumption here is that your team should have a style guide.
There are some items I believe should only rarely be addressed during a code review:
  • Scope or mission feedback: “I don’t think you should be doing this project” is almost never a useful comment for a code review. If you think the team is embarking on projects that are not worthwhile, that is great feedback to share, but not in the context of a code review. The exception here is if someone is introducing a new way of doing something that is already well-handled in some other way.
  • Design review: A code review is not the time to evaluate the overall design of a project. For example, "I don't think you should be using the DB to store this data" is not useful. It is incumbent upon the developer to have their designs reviewed before implementation, and there will be scenarios in which the fundamental design is questioned during the implementation, but for a project that has been through a design review, let the results of that design stand.
  • Personal preference: “I would rather you do it my way” is an invitation to an unproductive debate. If you have a way that is demonstrably better, you should always argue for it. The hardest part about this point is identifying when a review has deteriorated to matters of personal preference; the hallmark I spot most often is when people are trading hypothetical scenarios in which alternative solutions might be advantageous, with no way of determining the likelihood of said scenarios. In these cases, the default is to use what the developer has already written.
How can you, as the developer, write your code in such a way as to make the code review go smoothly? A few simple practices help.
  • Correctness: Comprehensive unit tests are the best demonstration that code functions as intended.
  • Complexity: Favoring small methods and cleanly-separated functional units makes it easy for your reviewer to see how everything fits together.
  • Consistency: When building new functionality, you can maximize the consistency of your code with existing work by taking the time to research how similar code solves similar problems. If you suspect someone else has solved the same problem before, ask!
  • Maintainability: Thorough commenting and the use of meaningful names throughout your code help ensure that others will be able to easily understand your code.
  • Scalability: My #1 recommendation in demonstrating the performance of new code is to just take 30 minutes and write a little driver to run your code through its paces. This can be total throwaway code, but simply being able to tell your reviewer that you’ve done a performance test makes this topic less debatable (a minimal sketch of such a driver follows this list).
  • Style: The most important thing you can do to maintain style consistency is to configure your editor to implement your style guide. (As an aside, this also means that your team should adopt a style guide that is simple to automate in the editors used by the team.)
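
As promised under Scalability, here is a minimal sketch of the kind of throwaway driver I have in mind. The function under test (dedupe_listings) and the data shape are hypothetical stand-ins; the point is just to generate representative volumes and produce rough timings you can quote in the review.

```python
import random
import time

def dedupe_listings(listings):
    """Placeholder for the new code under review."""
    return list({(name, zip_code) for name, zip_code in listings})

def run_driver(n):
    # Generate synthetic data at a representative volume.
    listings = [(f"biz-{random.randrange(max(n // 10, 1))}",
                 f"{random.randrange(100000):05d}")
                for _ in range(n)]
    start = time.perf_counter()
    deduped = dedupe_listings(listings)
    elapsed = time.perf_counter() - start
    print(f"n={n:>7}: {elapsed:.3f}s, {len(deduped)} unique listings")

for n in (1_000, 10_000, 100_000):
    run_driver(n)
```
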
Even when everyone on the team follows these guidelines, there will frequently be strong debate during code reviews, and that’s a great thing - the point of these guidelines is to focus the debate on what matters.

All of this ignores some very tactical questions about code reviews like what code gets reviewed and what tools we use to aid that process. If you’re interested in hearing more about that, leave a comment and I will follow up with another post.