What Changed When We Fixed Our Technical Hiring Process

Mar 30, 2026
Vlad

There is a particular kind of organisational pain that comes from a technical hiring process that almost works. Not one that is catastrophically broken; those are easy to diagnose. But one that produces occasional good hires, enough to sustain the belief that the process is basically fine, while consistently losing candidates you wanted to keep, dragging on longer than it should, and leaving the engineering team exhausted and sceptical about whether it will ever improve.

The company in this story — a Series B fintech based in Amsterdam — had that kind of process. They had made good hires through it. They had also lost three strong candidates in six months, all at the offer stage, and run one search for a senior backend engineer that took four months and ended with a candidate who was fine but not what they had originally wanted. Their head of engineering described the process as “honestly, a grind.” Their HR lead described it as “we do our best with what we have.”

What followed, when they decided to look at the process honestly, is a useful illustration of how hiring technical talent changes when you stop accepting defaults and start designing deliberately.

 

The Before: What the Process Actually Looked Like

The process had accumulated over three years of the company’s growth and had never been designed as a whole — it had been assembled from parts. The job description template was adapted from one used when the team was eight people. The technical test was a take-home exercise that had been written by a developer who left the company eighteen months earlier and had never been revised. The culture interview was informal and conducted by whoever was available.

From the candidate’s point of view, the journey looked like this: they applied or were approached, waited three to five days for a response, had a thirty-minute screening call, were sent a four-to-six-hour technical test with a forty-eight-hour deadline, waited another week for feedback, had a ninety-minute technical interview with two engineers, waited another five to seven days for a debrief, had a culture conversation with the HR lead and one engineer, waited another three to four days, and then — if they were still interested — received a formal offer letter via email, usually about thirty-six days after first contact.

Thirty-six days. In a market where the strongest candidates for senior technical roles are typically off the market within two weeks of starting their search.

The head of engineering knew the timeline was long. What he had not mapped out clearly was where the time was actually going. When we did that mapping, the picture was illuminating. Eleven of the thirty-six days were waiting time — days where the candidate had responded or completed something and the company had not yet moved. Not deliberation time. Waiting time. Email not sent. Call not scheduled. Decision not communicated. Eleven days of unnecessary lag in a thirty-six-day process.

 

The Diagnosis: What Was Actually Breaking

Once the time audit was done, three root causes were clear.

The first was the technical test. Not because a technical assessment was wrong, but because this specific one was wrong for the goal. It had been written to test algorithmic problem-solving in a way that was not representative of what the role actually required. The problems were abstract. The expected answers were narrow. Strong candidates who worked through the test as an open-ended engineering problem — exploring the solution space rather than producing the expected output — were being scored poorly not because their thinking was weak but because the rubric was not designed for the way senior engineers actually work.

More relevantly for the timeline, the test was consistently deterring candidates who had the experience to know it was not a good use of their time. The drop-off rate at the test stage was forty percent. Two in five candidates who had been screened and were interested in the role were not completing the assessment. And when we looked at which forty percent were dropping off, they were disproportionately the candidates with the most options — the senior engineers who were in multiple processes simultaneously and who had the leverage to decline things that did not respect their time.

The second root cause was the absence of a structured timeline. Because no one had ever documented when each stage was supposed to happen relative to the previous one, each transition defaulted to the pace of the least urgent person involved. One engineer who sat on the debrief panel had a habit of not giving feedback until he had “thought about it for a few days.” This added three to five days to every cycle. Nobody had ever asked him to do it differently, because nobody had ever quantified what it was costing.

The third root cause was the offer process. The company required three levels of approval before an offer letter could be sent: the hiring manager, the CFO, and the CEO. For a company of their size, this had made sense when they were twelve people. At ninety people with ongoing technical hiring, it meant that every offer went through a bottleneck that took a minimum of four days and occasionally as long as ten.

 

What Changed: The Redesign

The redesigned process made four changes, each targeting one of the root causes. None of them required significant new resources.

First, the technical assessment was replaced. The four-to-six-hour take-home was retired. In its place: a sixty-minute live technical conversation structured around a real scenario from the company’s actual codebase. Candidates were told the topic in advance — a specific type of data consistency challenge the engineering team had recently worked through. They were invited to think about it beforehand and come with questions. The session was collaborative rather than evaluative in tone, designed to feel like the kind of problem-solving conversation the candidate would be having regularly if they joined.

Second, a maximum response time was introduced at every stage: twenty-four hours. From the moment a candidate completed a stage, someone from the company was responsible for communicating next steps within twenty-four hours. This was documented, assigned, and enforced. It eliminated the eleven days of waiting time from the thirty-six-day average.

Third, the offer approval process was restructured. The CFO and CEO approved a salary band for each role at the point of opening the search rather than at the point of offer. Once a candidate was selected, the hiring manager could extend a verbal offer and issue a formal letter within twenty-four hours without requiring additional sign-off, provided the compensation was within the pre-approved band. For out-of-band exceptions, a four-hour turnaround was established as the norm.

Fourth, the debrief process was formalised. Every interviewer completed a structured scoring sheet within two hours of their conversation. The debrief meeting was scheduled on the same day as the final interview rather than the day after. Decisions were made in the debrief rather than deferred.

 

The After: What the Process Looks Like Now

The average time from first contact to signed offer in the twelve months following the redesign: fourteen days.

The drop-off rate at the technical assessment stage: seven percent, down from forty percent.

The number of offers declined at the formal offer stage in the twelve months following the redesign: one, compared to three in the six months before.

The number of searches that exceeded twenty-five days: two out of eleven, both of which involved genuinely niche specialisms where sourcing took longer than usual. Neither was caused by process failure.

The head of engineering’s assessment of the change: “It feels like we are actually in control of it now. Before, hiring felt like something that happened to us. Now it feels like something we run.”

 

What This Means for Hiring Technical Talent in Your Organisation

The changes this company made were not expensive. They did not require new headcount, new technology, or a significant budget increase. They required two things: an honest look at where the current process was losing time and candidates, and the willingness to change the defaults that were causing those losses.

Both of those things are harder than they sound, because defaults have histories. The four-to-six-hour test had been written by an engineer who believed it was the right way to evaluate candidates. The three-level approval process had been put in place for good reasons at a different stage of the company’s growth. The informal debrief had never felt like a problem until someone measured how long it was taking.

When hiring technical talent, the biggest gains are usually not in any single part of the process. They are in the aggregate of small frictions that have accumulated over time without anyone noticing the cumulative cost. Identifying and removing those frictions — one at a time, with a clear rationale for each — is usually the highest-leverage thing a company can do to improve its technical hiring outcomes.

If you want to map your own process against these patterns, the step-by-step guide in this series is a useful starting point. And if you want support doing the kind of process audit and redesign that produced these results, that is work Tallenxis does with companies at this stage.
