The Tiger Hiring Algorithm


I’m going to try to do something hard: simplifying hiring down to a dead simple algorithm based on only four data-backed, high-signal attributes.

I’ve spent the last five years doing a considerable amount of hiring, and I think I may have unearthed some degree of wisdom when it comes to predictors of success.

[ NOTE: If you know anything about hiring you should be extremely skeptical about that last sentence. ]

What I am going to attempt is to only capture a few things about a person, and make a judgment based on those things alone. The good news is that hiring systems are already horrible, so there’s not much to mess up.

Here are the markers I’m looking to use:

  • Talent: The candidate’s passion, creativity, innovation, intelligence, and overall ability to build or produce new things. Rating: 1 – 5

  • Grit: The candidate’s ability to finish tasks, projects, or any other item on their life plate. Rating: 1 – 5

  • Aesthetic: This measures the quality of the candidate’s output. It’s related to grit, but different enough to be distinct. Think about the quality of writing, the quality of presentations, the attention to detail, etc. Essentially, how much work needs to be done on any given deliverable after that person hands it to you. Does it need a team to work on it for a few hours, or could it go straight to the New York Times or Wall Street Journal and be accepted? Rating: 1 – 5

  • Reputation: This one has a tricky name. It doesn’t mean they’re respected for just anything. This is specifically testing the degree to which the people in their past would work with them again and/or would recommend that you hire them. Rating: 1 – 5

Now comes the elegant, brilliant, and/or completely asinine part: the 1-5 ratings are mapped to grades. A 5 is an A-player in that category, 4 is B, 3 is C, 2 is D, and 1 is F.

So it’s TGAR: Talent, Grit, Aesthetic, Reputation, or the Tiger Algorithm. At the highest level, one’s rating is simply their lowest rating in any category. It’s harsh but powerful.

If someone is labeled as a B-player, you know they are at least a 4 or higher in every category. That’s a star. And an A-player now suddenly has a lot of meaning.
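The scoring rule above is simple enough to sketch in a few lines of code. This is a minimal illustration of the lowest-rating rule, not a reference implementation; the function and variable names are my own.

```python
# A minimal sketch of the TGAR scoring rule: each category is rated 1-5,
# ratings map to grades (5=A ... 1=F), and the overall grade is the
# LOWEST rating across all four categories.

GRADES = {5: "A", 4: "B", 3: "C", 2: "D", 1: "F"}

def tgar_grade(talent: int, grit: int, aesthetic: int, reputation: int) -> str:
    """Return the overall TGAR grade: the candidate's lowest category rating."""
    ratings = (talent, grit, aesthetic, reputation)
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("All ratings must be between 1 and 5")
    return GRADES[min(ratings)]

# A candidate who is a 5 in everything except a 4 in grit is a B-player overall.
print(tgar_grade(talent=5, grit=4, aesthetic=5, reputation=5))  # B
```

Note how unforgiving the `min` is: a single 2 in any category makes someone a D-player no matter how strong they are elsewhere, which is exactly the harshness described above.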

You can still use individual category ratings to great effect, however, if you’re careful about how you apply them. You can for example put high-grit people on certain tasks if you know they’re not creative—but only if you understand the job really well.

Conversely, you can put high-talent, high-passion people on certain jobs even if they’re low-grit—but again—only if you know exactly what you’re doing and have properly matched the person to the task.

Talent factors

These are just placeholders, and can be adjusted as needed.

Talent questions

  1. What kind of things have you built or wanted to build? Could be games, teams, arts, crafts, bands, songs, poems, cakes, tree houses, robots, stories, books—whatever.

  2. Do you have a GitHub account with any projects we could take a look at? Tell me about them. What problem did they solve, and how did you approach it?

Talent inputs

  • Google them and look for interesting / creative things they have done (or a conspicuous absence of them)

Grit factors

These are just placeholders, and can be adjusted as needed.

Grit questions

  1. Tell me about some projects you’ve started over the last few years.

  2. Sounds exciting! What’s the current state of these? (you’re looking to hear that they complete things to a significant degree)

[ NOTE: Keep in mind that lots of people who complete projects have far more that aren’t complete. We’re not looking for a robot. We’re just NOT looking for people who have lots of ideas but never get them to a usable state. ]

Grit inputs

  • This is an area where you don’t get great signal from the person in an interview—especially if they know what you’re aiming at. It’s better to look for evidence of them finishing projects in the real world

Aesthetic factors

Aesthetic is another area where you don’t get great signal from the person in an interview—especially if they know what you’re aiming at. It’s better to look at output they’ve actually created either for you in a work sample or (ideally) in the real world.

Look at a long history of work for them, focusing on key markers for overall quality in the work such as clarity, grammar, spelling, typography, and polish.

Reputation factors

It’s possible to get some negative indicators about reputation (remember, we’re talking about how much others would want to work with them, not whether or not they’re known for being good at something).


[ NOTE: Remember that we’re looking for how much co-workers in their past would want to work with them, not whether they are known for being good at something. These are very much different things. The latter goes into talent, not reputation. ]

The best way to gather this information is:

  1. Talk to people they’ve worked with in the past and try to get the least biased information you can. You can usually tell pretty quickly if someone isn’t a fan, and based on who it is you can often tell whether it’s justified.

  2. Look at how often they move jobs, and dig in if you see lots of hops. People at those jobs may have good information for this factor.

Ways to test the algorithm

Algorithms are worthless if they don’t help you solve problems. There are a few different ways to test the effectiveness of this system.

  1. You can try to hire people using it and see how they do over time

  2. You can take a look at everyone you’ve hired over some past time period X, see how they actually did, and then rate them using this system to see if it would have predicted success

I suggest #2, as that is precisely the method I used to make the algorithm.

Stated differently, I am trying to get away from having ideas about what makes a good hire, and instead figure out what known-good employees have in common, and then filter for that.

Only if it works for case #2 should you try to use it for case #1.
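The retroactive test (#2) can be sketched as a tiny backtest: rate your past hires in the four categories, apply the lowest-rating rule, and check how often a B-or-better grade lines up with actual success. The data below is invented purely for illustration; the threshold of "B or better predicts success" is my assumption, not something specified in the post.

```python
# A hypothetical backtest of the TGAR rule against past hires (method #2 above).
# Ratings and outcomes are made up for illustration.

past_hires = [
    # (talent, grit, aesthetic, reputation, actually_succeeded)
    (5, 5, 4, 5, True),
    (5, 2, 4, 4, False),
    (4, 4, 4, 5, True),
    (3, 5, 5, 2, False),
]

# Assumed decision rule: predict success when the lowest rating is 4 or
# higher, i.e. the candidate is a B-player or better overall.
correct = sum((min(h[:4]) >= 4) == h[4] for h in past_hires)
print(f"{correct}/{len(past_hires)} past hires predicted correctly")
```

If the rule can’t retrodict your known outcomes, there’s no reason to trust it prospectively, which is the point of doing #2 before #1.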

Next steps

Anyway, like I said, it’s ambitious, and you should be skeptical. But I’m optimistic and eager to hear any feedback.

My next step is to try to automate as much of this as possible, e.g. putting it on a website with the questions and a rating pull-down that produces the Tiger (TGAR) score upon completion.

Let me know if you’re interested in participating.

Notes

  1. I cannot stress enough 1) how skeptical you should be of this, and 2) how optimistic you should be about this.

  2. This is an evolution of an earlier project of mine called the TG Rating System, just a bit more fleshed out.

  3. I am grappling with whether to separate Passion into its own category or keep it integrated into Talent. I’d hate to mess up the TGAR (Tiger) thing, but that’s a ridiculous reason to exclude it if it should be broken out.
