Issue 159: Your last bad hire had a great CV, didn't they? Ft. Angus Crombie, Head of Product - Calyptus

Author:
Nishant Singh
May 3, 2026

In this edition of Coffee with Calyptus, we sit down with Angus Crombie, Head of Product at Calyptus, to dig into why hiring is still so painfully slow and what it actually takes to fix it. Angus breaks down how video interviews surface what CVs simply cannot, why "cultural fit" only means something when employers stop being vague about it, and how AI fluency has stopped being a nice-to-have and become the baseline. If you have ever lost a great candidate to a slow process, this one is for you.

Fourteen days from dropping a job description to landing a hire is a bold claim in an industry where the average time-to-hire can stretch to months. What actually makes that possible, and where do most platforms go wrong that makes the process so slow in the first place?

Most platforms slow things down by dumping a pile of CVs in front of a hiring manager and leaving them to figure it out. There's no structure, no filtering, and the back-and-forth with candidates is almost entirely manual. That's why things can take so long. At Calyptus, a combination of automation and a large, rich candidate database means the heavy lifting happens before a human even needs to get involved. Screening, follow-ups, reminders - all handled.

The bigger piece is decision quality. The more useful information an employer has upfront - including from video interviews, which I'll come to - the faster and more confidently they can move. By the time they're speaking to someone, they already know it's worth their time. Over 80% of employers on the platform invite someone to interview within the first three candidates they review - that's not luck, it's what better upfront information does.

Fourteen days is what happens when you remove the friction that everyone else has just accepted as normal.

Most hiring tools default to text and static profiles. Building video interviews as the core of the candidate experience is a non-trivial product bet. From where you sit, what are the hardest problems to solve to make that work reliably, and what does video surface about a candidate that everything else simply misses?

The hardest product problem is getting candidates comfortable enough to actually show that side of themselves. AI interviews are still relatively new, and there's a real anxiety around them that we take seriously. A big part of our work has been making the experience feel like a human interview - natural, low-pressure, not clinical. What we've seen is that candidates are getting much more used to it, and those doing one for the first time are overwhelmingly surprised at how good the experience is. It's just a matter of time.

In terms of CV vs video - a CV does a good job of telling you what someone has done. Video gives you a much better idea of how they think, how they communicate, and whether you'd actually want to work with them. You get much richer information on skills, drivers, and genuine interest in a role - things that are impossible to capture in a static profile. We had a candidate recently applying for a mid-level backend role. Nothing on their CV stood out particularly - solid but unremarkable. In their video they mentioned they'd been quietly building a side project using the same stack the employer was migrating to. That came up in thirty seconds of conversation and they got the job. That never gets onto a CV.

Cultural fit is one of those concepts everyone in hiring talks about but almost nobody can pin down precisely. How did you go about translating something that fuzzy into an actual product feature, and how do you make sure it adds signal rather than just noise?

We're the first to admit we're at v1 on this. Cultural fit is genuinely hard to operationalise, and anyone claiming they've fully solved it is probably overselling. What we do have is a screening interview format that draws out far more about a candidate than a CV ever could - how they talk about their work, what they value, how they handle certain situations. That's a much richer starting point than most tools offer.

The catch is that it only works if employers are engaged and specific about what they're actually looking for. Vague inputs produce vague outputs. One client, a Series B fintech, came to us pretty frustrated with previous hires not sticking around past six months. When we pushed them to actually describe what made their best people successful there, it wasn't technical skills - it was people who were comfortable with ambiguity and didn't need a lot of direction. We fed that into the screening and filtered specifically for it. Their last three hires through us are all still there. Where we're heading is using AI reasoning to improve the matching layer over time - but we're being deliberate about how we approach that. The goal is more signal, not more noise.

There is a growing conversation about whether AI fluency is becoming a baseline requirement across most roles. How is that changing what employers are asking for, and how does Calyptus help surface that in a way that is actually meaningful rather than just a checkbox?

It's not really a conversation anymore - it's a given. And it's not just engineers. Sales teams, marketers, operations people - every function is being affected, and employers know it. The question has shifted from "does this candidate know AI exists?" to "can they actually use it to do better work?" Those are very different things to assess.

We've built our own AI fluency tests directly into the platform so employers can evaluate this in a way that's integrated and straightforward. The tests adapt based on your job description - immediately suggesting relevant options or giving you the ability to build custom ones yourself. Candidates then interact with an AI bot that asks probing questions much like a real interview would, and we analyse the quality of their questions, how they communicate, and the quality of their output. Early feedback from both candidates and hiring teams has been really positive, and it's a feature that's only going to become more central to how people hire.

Product teams at fast-moving startups are constantly pulled between reacting to what clients need today and building toward a longer-term vision. How do you hold that balance, and how do you decide what actually makes it onto the roadmap?

It's a constant prioritisation battle and a lot depends on where feedback is coming from, how frequently you're hearing the same thing, and capacity. Internally, we keep communication tight across the team - everyone understands why something got prioritised over something else, which keeps the roadmap credible and decisions fast. Externally, we talk to clients regularly, understand what's most important to them, and know what we need to build well in advance rather than reacting after the fact.

A good recent example - we had three clients in the same month ask for better visibility into where candidates were dropping off in the process. It wasn't on our immediate roadmap but the consistency of that feedback made it an easy call to pull forward. We shipped a basic version within the next sprint. Those same clients have since referred others. Listening closely to what's causing friction tends to be a better roadmap signal than almost anything else.

This flexibility is really important and is baked into how we operate. The ability to react quickly is itself a product advantage when you're up against slower-moving competitors.

We hope you enjoyed this edition of Coffee with Calyptus. Stay curious, stay inspired, and keep building what matters. Explore more editions and insightful articles at https://www.calyptus.co/blog.