What Years of Sitting on Both Sides of the Interview Table Taught Me

Introduction

I have sat in an interview chair — either side of the table — more times than I can count at this point. I’ve been the nervous candidate rehearsing answers on the commute. I’ve been the interviewer trying to assess someone’s depth in forty-five minutes. I’ve given feedback that I’m proud of and feedback I wish I could take back. I’ve rejected candidates who probably deserved a chance and selected candidates who didn’t work out the way I expected.

Interviewing is one of those things that everyone in the industry does constantly and almost nobody talks about with any real honesty. The candidate narratives are polished — the “tell me about a challenge you overcame” stories that have been refined through repetition into something smooth and slightly unreal. The hiring narratives are sanitised — “we went with someone whose profile more closely aligned with our needs,” which could mean almost anything.

This is the unpolished version. What the process actually taught me, from both seats.

The First Time I Failed an Interview

The first time I failed a technical interview that I expected to pass, I spent the drive home convincing myself the questions were unfair.

They weren’t. The questions were reasonable. I just didn’t know the answers as well as I thought I did.

That’s a specific and uncomfortable experience — the gap between the self-image you’ve built and the evidence in front of you. I had been writing Angular for a couple of years at that point. I thought I knew it well. The interviewer asked me to explain change detection — not just what it does, but how it works, what the implications of OnPush are, what zone.js is actually doing. I could gesture at the answer but I couldn’t give it clearly. The interviewer didn’t say anything unkind. They just moved on, and the silence was its own feedback.

What I took from that failure wasn’t the specific Angular knowledge gap — I filled that in over the following weeks. What I took was a more honest accounting of the difference between familiarity and understanding. I was familiar with change detection. I had been using it for two years. But I hadn’t thought carefully enough about how it worked to explain it to someone who was testing for that understanding.

Familiarity is what lets you use a tool. Understanding is what lets you talk about why it works, where it breaks, and what tradeoffs it carries. Interviews test for the second thing. Production work, mostly, only requires the first. That gap is where a lot of confident developers get surprised.

After that failure, I started learning differently. Not learning more things — learning existing things more deeply. Asking not just “how do I do this?” but “why does it work this way, and what does that mean for how I should use it?”

The interview didn’t just expose a knowledge gap. It exposed a learning gap.
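The distinction is easiest to see with the example that tripped me up. Below is a toy model — emphatically not Angular’s real implementation, just a sketch of the concept — of why the Default strategy re-checks every component on every pass while OnPush skips a component unless one of its inputs changed by reference:

```typescript
// Toy model of Angular's change detection strategies. This is NOT how
// Angular is actually implemented — it is a sketch of the idea only:
// Default dirty-checks every component on every pass; OnPush skips a
// component unless one of its inputs changed by reference.

type Strategy = "Default" | "OnPush";

interface Component {
  name: string;
  strategy: Strategy;
  inputs: Record<string, unknown>;   // current input bindings
  lastSeen: Record<string, unknown>; // input references from the last check
}

// An OnPush component is only "dirty" if an input reference changed.
function shouldCheck(c: Component): boolean {
  if (c.strategy === "Default") return true;
  return Object.keys(c.inputs).some((k) => c.inputs[k] !== c.lastSeen[k]);
}

// One change-detection pass; returns the names of the components checked.
function detectChanges(tree: Component[]): string[] {
  const checked: string[] = [];
  for (const c of tree) {
    if (shouldCheck(c)) {
      checked.push(c.name);
      c.lastSeen = { ...c.inputs };
    }
  }
  return checked;
}

// The implication I couldn't articulate in the interview: OnPush misses
// in-place mutations, because the input reference never changes.
const user = { name: "Ada" };
const tree: Component[] = [
  { name: "Header", strategy: "Default", inputs: {}, lastSeen: {} },
  { name: "Profile", strategy: "OnPush", inputs: { user }, lastSeen: { user } },
];

user.name = "Grace";                          // mutation, same reference
console.log(detectChanges(tree));             // ["Header"] — Profile skipped

tree[1].inputs = { user: { name: "Grace" } }; // new reference
console.log(detectChanges(tree));             // ["Header", "Profile"]
```

The names and shapes here are mine, not Angular’s, and the real mechanism — zone.js patching async APIs so it knows when to trigger a pass — is far more involved. But this is roughly the answer the interviewer was testing for, and two years of using the framework had never forced me to say it out loud.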

The Interviews I Passed That I Shouldn’t Have

This is the one that’s harder to admit.

There are interviews I passed on confidence rather than competence. Not fraud — I could do the job. But there were specific questions where I bullshitted more than I knew, gave a partially correct answer with enough conviction that it read as complete, or steered the conversation toward territory I was more comfortable in before the interviewer could probe the areas where I was weakest.

I got good at that. Too good, at some point. The storytelling around experience is a skill separate from the experience itself, and I developed it. I could frame a project in terms that made my contribution sound more central than it was. I could name-drop technical concepts accurately enough that the interviewer moved on without testing the depth.

This gave me jobs I was capable of growing into, which mostly worked out. But I’ve thought about it a lot since I started sitting on the other side of the table, because now I’m the person being given partially correct answers with full conviction, and I know from the inside how it’s done.

The honest lesson here is uncomfortable: interviews are not pure assessments of competence. They’re partly assessments of how well someone can present competence under pressure. Those are related skills but they’re not the same skill. Some very good engineers interview badly. Some mediocre engineers interview very well.

Knowing this changed how I interview candidates.

Rejecting Someone I Shouldn’t Have

I want to tell this one specifically because I think about it more than any other single hiring decision I’ve made.

A few years into my career, I was part of an interview panel for a junior frontend role. We had a candidate who was clearly nervous — visibly so. The kind of nervous that makes answers come out in the wrong order, that makes a person start a sentence and correct themselves twice before finishing it. She knew the material. When she got to the end of a thought, the answer was usually right. But getting there was halting and difficult to follow.

We didn’t hire her. The feedback we gave internally — and I was part of giving it — was something like “communication skills need development.” Which was a way of saying she was nervous in the interview without acknowledging that nervousness in an interview is not the same thing as poor communication in a working environment.

I’ve worked with people since then who interviewed smoothly and communicated terribly once they were on the team — in standups, in documentation, in code reviews. And I’ve worked with people who were halting and awkward in their interview and turned out to be precise, thoughtful communicators once they were in a context where they weren’t being formally evaluated under pressure.

An interview is one of the most artificial contexts you can put a person in. You’re asking them to perform competence and communication simultaneously, in front of strangers, with their livelihood on the line, on a schedule, with no opportunity to look something up or take a moment to think without it reading as uncertainty. The people who do well in that context are not always the people who do well in the actual job.

I rejected someone based on interview-context nervousness and called it a communication assessment. I don’t think I would make that decision the same way now.

Selecting Someone I Wasn’t Sure About

On the other side: I’ve hired people I wasn’t fully confident in, and some of those turned out to be the best decisions I made.

There was a candidate for a mid-level role who, technically, was slightly below the bar I’d set in my head for the position. His fundamentals were solid but he hadn’t worked with the specific stack we were using, and some of his answers in the technical round showed gaps in areas I cared about. On paper, there were stronger candidates.

What he had that the stronger candidates didn’t was a quality I found hard to name in the moment but eventually landed on: he was genuinely curious about being wrong. When I pushed back on one of his answers during the interview — not aggressively, just “I’d think about that differently, here’s why” — he didn’t defend his original answer or defer politely and move on. He engaged with the pushback. He thought about it in real time and said “actually that makes more sense, I hadn’t thought about it from that angle.”

That quality — the willingness to update in public, to treat being corrected as information rather than a threat — is rarer than technical skill and harder to develop. Technical gaps close. The posture toward learning either is or isn’t there in a meaningful way.

He was one of the best hires I was involved in. His technical gaps closed within months. His curiosity and openness to feedback made him easy to mentor and easy to work alongside.

I nearly didn’t hire him because I was weighing the wrong things.

What Giving Feedback Taught Me

Delivering post-interview feedback is one of the things I did worst for a long time, and I want to be honest about the specific ways I did it badly.

Early on, I gave feedback that was vague to the point of uselessness. “We felt your experience wasn’t quite the right fit for where we are right now.” Which is a sentence that contains no information and serves primarily to avoid discomfort — mine, not the candidate’s. It protects the interviewer from having a difficult conversation while leaving the candidate with nothing they can actually act on.

I gave that kind of feedback because I told myself I was being kind. I wasn’t being kind. I was being cowardly and dressing it up as consideration.

The feedback that actually helps a candidate is specific and honest: the technical area where the answers weren’t deep enough, the communication pattern that was hard to follow, the specific question that revealed the gap. That feedback is harder to give. It risks a defensive response. It requires you to stand behind a judgment. But it’s the only feedback that gives the candidate something they can act on.

I got better at this slowly and mostly through receiving the other kind — the useless, vague, protective feedback — enough times as a candidate that I understood viscerally how it felt. You walk away knowing you didn’t get the job but not knowing why, which means you can’t fix anything, which means you’ll probably make the same mistakes in the next interview.

The best feedback I ever received as a candidate came from an interviewer who told me, plainly, that my answers on component design showed a good understanding of the component level but very little instinct for how components fit into larger architectural patterns — and that for the role they were hiring for, the architectural thinking was actually more important than the component-level depth. That was uncomfortable to hear. It was also precisely accurate, and I spent the next year deliberately working on exactly that gap. That interviewer gave me something genuinely useful. The ones who said “we went with someone whose profile more closely aligned” gave me nothing.

I try to give the honest version now. I don’t always succeed — there are still moments where I soften something past usefulness — but it’s what I’m aiming for.

What Receiving Feedback Taught Me

The feedback that stung most was not the harsh feedback. It was the accurate feedback delivered calmly.

Someone telling you that you talk too much in technical discussions — that you explain things to the point where you’ve answered the question and then kept going past it, filling silence that didn’t need filling — is not a devastating critique. It’s a fairly minor note. But when it’s accurate, when you recognize it as something you’ve been doing without realising, it lands in a particular way.

I had a pattern, for a period, of over-explaining in interviews. Not because I didn’t know the answer — because I was nervous and talking past the answer was how I managed the nerves. It read to interviewers as either lack of confidence (why are you still explaining if you’ve answered it?) or lack of structure (you can’t identify where the answer ends). Neither reading was flattering, and both were wrong about the cause while being right about the effect.

Someone told me this once, directly, after an interview. I didn’t love hearing it. I sat with it for a few days before I could admit it was accurate. Then I worked on it — not through some dramatic intervention, but just by practicing stopping when the answer was complete. Noticing the impulse to keep going and not following it.

The feedback changed something real. And I would never have known to change it without someone being willing to say it plainly.

This is what I try to remember when I’m deciding whether to give a candidate the honest version of the feedback: someone once gave me the honest version, and it actually helped. The comfortable version would have been easier to hear and useless.

The Candidate Who Changed How I Interview

There was a candidate who performed poorly in the technical round by any conventional measure. He got a live coding problem wrong, misremembered an API, and described a concept in a way that was close but not quite right.

But in between those answers, he asked questions that were better than most of the questions I’d been asked by experienced developers. Not questions designed to seem clever — genuine questions about why we made specific architectural decisions, whether there were constraints behind choices that weren’t obvious from the outside, what the team found hardest about the current codebase. Real curiosity, not performance.

I started asking myself during that interview: if I hired this person and their technical knowledge filled in to the level I needed within six months — which, given the questions he was asking, seemed plausible — would I be happy with that hire? The answer was yes.

We didn’t hire him, in the end, because the consensus on the panel was that the technical gap was too significant for the timeline. I still wonder if that was the right call.

What he changed was the weight I give to the quality of questions a candidate asks. It’s now one of the things I’m most attentive to in interviews. The questions someone asks reveal their actual mental model — where the edges of their understanding are, what they’re curious about, how they think about problems they don’t yet know the answer to. That information is at least as valuable as the answers they give, and it’s harder to rehearse.

A candidate who asks good questions and gets some answers wrong is often more promising than a candidate who gets all the answers right and asks nothing.

The Lie of the Perfect Interview Process

Something I’ve come to believe, and that I hold with some frustration, is that no interview process reliably identifies the best engineers.

I’ve seen engineers who were genuinely excellent — deep, creative, careful thinkers — fail technical interviews because the format didn’t suit how they process problems. Live coding under observation is not how most developers work, and some very good developers freeze in that format. Systems design on a whiteboard in forty minutes rewards people who have rehearsed systems design interviews, which is its own skill adjacent to but not identical to the skill of actually designing systems.

I’ve seen mediocre engineers pass interviews consistently because they’re good at the performance of competence in constrained time windows. They know how to structure an answer. They’ve done enough interviews that the format is familiar. They project confidence without necessarily having the depth that the confidence implies.

The honest position is that interviews are a noisy signal. They give you some information about a candidate, and that information is better than nothing, but it’s significantly less reliable than the industry tends to assume. The best signal I’ve ever had about how someone will perform in a role is watching them do actual work — a paid trial, a realistic take-home that respects their time, a collaboration on a real problem rather than a contrived one.

The industry mostly doesn’t do that, for reasons that are mostly logistical and somewhat cultural. So we interview, and we try to make the interviews as informative as possible, and we accept that we will sometimes be wrong in both directions.

Accepting that imperfection doesn’t mean giving up on trying to interview well. It means holding your hiring decisions with appropriate humility.

What I Actually Look For Now

After all of this — the failures, the passes, the hires I regret, the hires I’m proud of — the things I look for in an interview have shifted significantly from where they started.

I used to look primarily for technical depth. Does this person know the things I need them to know? Can they answer the questions I think matter?

I still care about technical knowledge. You need a sufficient level of it for the role to work. But I’ve moved technical depth further down the priority list and moved other things up.

How does someone respond to being wrong? Do they defend, deflect, or engage? The developers who engage — who treat a correction as information rather than a challenge — are the ones who grow fastest and are easiest to work alongside.

How do they think about problems they don’t know the answer to? Do they fall silent, or do they reason out loud, make their thinking visible, ask clarifying questions? The reasoning process is more informative than the final answer, and it’s also closer to what the job actually requires day to day.

What do they notice about the work they’ve done? Can they articulate what they would have done differently? A candidate who tells me only what went right is either not being honest or not reflecting enough. The developers I respect most can talk about their own work with genuine critical distance.

And the questions they ask. Always, the questions they ask.

Conclusion

Interviewing is imperfect in both directions. You will fail interviews you deserved to pass. You will pass interviews you didn’t fully deserve. You will make hiring decisions that look right and turn out wrong, and decisions that felt uncertain and turned out to be among the best you made.

The thing that’s helped me most, on both sides, is treating each interview as a source of information rather than a verdict. As a candidate: what does this process tell me about the gap between what I know and what I need to know, between how I come across and how I intend to? As an interviewer: what is this person actually showing me, underneath the performance of the interview itself?

The verdict framing — pass or fail, hire or reject — is real and consequential. I’m not minimising it. But if that’s all you take from each interview, you’re leaving a lot of learning on the table.

I’ve learned more about how I think, how I communicate, and what I actually value in engineering from interviews — the ones I failed, the ones I shouldn’t have passed, the candidates I misjudged in both directions — than from almost any other single professional experience.

That’s more than I expected when I walked into the first one.