The best interview process I've ever been a part of involved pair programming with the person for a couple of hours, after a tech screening that was a phone call with a member of the team. You never failed to know within a few minutes whether the person could do the job and be a good coworker. This process worked so well that it created the best, most productive team I've worked on in 20+ years in the industry, despite that company's other dysfunctions.
The problem with it is the same curse that has rotted so much of software culture—the need for a scalable process with high throughput. "We need to run through hundreds of candidates per position, not a half dozen, are you crazy? It doesn't matter if the net result is better, it's the metrics along the way that matter!"
Teams are really sleeping on code reviews as an assessment tool. As in having the candidate review code.
A junior, mid, senior, staff are going to see very different things in the same codebase.
Not only that, as AI generated code becomes more common, teams might want to actively select for devs that can efficiently review code for quality and correctness.
I went through one interview with a YC company that had a first round code review. I enjoyed it so much that I ended up making a small open source app for teams that want to use code reviews: https://coderev.app (repo: https://github.com/CharlieDigital/coderev)
Company A wants to hire an engineer. An AI could solve all their tech interview questions, so why not hire that AI instead?
There's very likely a real answer to that question, and that answer should shape the way that engineer should be assessed and hired.
For example, it could be that the company wants the engineer to do some kind of assessment of whether a feature should be implemented at all, and if yes, in what way. Then you could, in an interview, give a bit of context and ask the candidate to think out loud about an example feature request.
It seems to me the heart of the problem is that companies aren't very clear about what value the engineers add, and so they have trouble deciding whether a candidate could provide that value.
I was asked by an SME to code on a whiteboard for an interview (in 2005, I think). I asked if I could have a computer; they said no. I asked if I would be using a whiteboard in my day-to-day work; they said no. I asked why they used whiteboards; they said they were mimicking Google's best practice. That discussion went on for a good few minutes, and by the end of it I was teetering on leaving because the fit wasn't good.
I agreed to do it as long as they understood that I felt it was a terrible way of assessing someone's ability to code. I was allowed to use any programming language because they knew them all (allegedly).
The solution was a pretty obvious bit-shift. So I wrote memory registers up on the board and did it in Motorola 68000 assembler (because I had been doing a lot of it around that time). Halfway through they stopped me, and I said I'd be happy to do it again if they gave me a computer.
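The original question isn't given here, but as a purely hypothetical example of the genre, two classic whiteboard problems whose answers are "an obvious bit-shift" are multiplying by a power of two and checking whether a number is one:

```python
def times_eight(n: int) -> int:
    # multiplying by 2**k is a left shift by k
    return n << 3

def is_power_of_two(n: int) -> bool:
    # a positive power of two has exactly one set bit;
    # n & (n - 1) clears the lowest set bit, leaving 0
    return n > 0 and n & (n - 1) == 0
```

Either of these is also a one-instruction answer in 68000 assembler (something like `LSL.L #3,D0` for the shift), which is presumably what made the stunt on the whiteboard work.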
> Using apps like GitHub Co-pilot and Cursor to auto-complete code requires very little skill in hands-on coding.
This is a crazy take in the context of coding interviews. First, it's quite obvious if someone is blindly copying and pasting from Cursor, for example. Second, figuring out what to do is a significant portion of the battle: if you can get Cursor to solve a complex problem, elegantly, and in one try, the likelihood that you're actually a good engineer is quite high.
If you're solving a tightly scoped and precise problem, like most coding interviews, the challenge largely lies in identifying the right solution and debugging when it's not right. If you're conducting an interview, you're also likely asking someone to walk through their solution, so it's obvious if they don't understand what they're doing.
Cursor and Copilot don't solve for that; they make it much easier to write code quickly once you know what you're doing.
I've accidentally been using an AI-proof hiring technique for about 20 years: ask a junior developer to bring code with them and ask them to explain it verbally. You can then talk about what they would change, how they would change it, what they would do differently, if they've used patterns (on purpose or by accident) what the benefits/drawbacks are etc. If they're a senior dev, we give them - on the day - a small but humorously-nasty chunk of code and ask them to reason through it live.
Works really well, and it mimics what we find is the most important bit about coding.
I don't mind if they use AI to shortcut the boring stuff in the day-to-day, as long as they can think critically about the result.
Nowadays I am on the other side of the fence: I am the interviewer. We are not a FAANG, so we just use a sane interview process. A single interview: we ask candidates about their CV, what their expectations are, what their competences are, and we ask them to show us some code they have written. That's all. The process is fast and extremely effective. You can weed out weak candidates in minutes.
My thinking about leetcode medium/hard as a 30-45 minute tech interview (with a few minutes of pleasantries and 10 minutes reserved for questions) is that you are really only likely to reveal two camps of people, taking in good faith that they are not "cheating": those who approach the problem from first principles, and those who know the solution already.
Take the maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.
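For context, the algorithm in question is only a few lines once you have seen it, which is rather the point; a minimal Python sketch:

```python
def max_subarray(nums: list[int]) -> int:
    """Kadane's algorithm: maximum sum over all contiguous subarrays, O(n).

    The best subarray ending at index i either extends the best one
    ending at i-1 or starts fresh at nums[i]; track the best overall.
    """
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)
        best = max(best, current)
    return best

# max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6, from [4, -1, 2, 1]
```

The insight, reusing the best suffix sum instead of re-scanning every subarray, is exactly the kind of thing that is easy to recite and hard to invent inside a 40-minute window.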
I often see comments like: this person had this huge storied resume but couldn't code their way out of a paper bag. Now having been that engineer stuck in a paper bag a few times, I think this is a very narrow way to view others.
I don't know the optimal way to interview engineers. I do know the style of interview that I prefer and excel at[0], but I wouldn't be so naive as to think that the style that works for me would work for all. I often chuckle about an anecdote from the fabled I.P. Sharp: Ian Sharp would set a light meter on his desk and measure how wide an interviewee's eyes would get when he explained APL to them. A strange way to interview, but is it any less strange than interviewing people via leetcode problems?
0: I think my ideal tech screen interview question is one that 1) has test cases, 2) ramps the test cases up gradually in complexity, 3) doesn't reveal the complexity all at once (the interviewer "hides their cards," so to speak), 4) is focused on a data structure rather than an algorithm, such that the algorithm falls out naturally rather than serving as the focus, 5) gives the candidate the opportunity to weigh tradeoffs, make compromises, and cut corners given the time frame, and 6) doesn't combine big ideas (i.e. you shouldn't have to parse complex input and do something complicated with it); pick a single focus. Interviews like this that I have participated in and enjoyed: construct a Set class (union, difference, etc.); implement an RPN calculator (ramp up the complexity by introducing multiple arities); create a range function that works like the Python range function (for junior engineers, this one involves a function with different behavior based on arity).
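As a sketch of one of those questions (in Python, with just the four binary operators; the arity-ramp follow-up is left out), an RPN calculator might start like this:

```python
def rpn_eval(tokens: list[str]) -> float:
    """Evaluate a reverse-Polish expression, e.g. ["3", "4", "+"] -> 7.0."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack: list[float] = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # operands come off the stack in reverse order
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    if len(stack) != 1:
        raise ValueError("malformed expression")
    return stack[0]
```

The complexity ramp then comes from introducing unary or variadic operators, which forces a redesign of the `ops` table; that is where the tradeoff conversation happens.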
The problem isn't AI, the problem is companies don't know how to properly select between candidates, and they don't apply even the basics of Psychometrics. Do they do item analysis of their custom coding tests? Do they analyse the new hires' performances and relate them to their interview scores? I seriously doubt it.
Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.
I mostly skipped the technical questions in the last few interviews I have conducted. I have a conversation: I ask them about their career, about job changes, about hobbies, about what they do after work. If you know the subject, skilled people talk a certain way, whether it is IT, construction, or sailing.
I do rely on HR having, hopefully, done their job and validated the work history.
I do have one technical question that started out as fun and quirky but has actually shown more value than expected. I call it the desert island cli.
What are your 5 linux cli desert island commands?
Having a hardware background, mine today are: vi, lsof, netcat, glances, and I am blanking on a fifth; we have been doing a lot of Terraform lately.
I have had several interesting responses:
A manager-level candidate with 15+ years of hands-on experience thought it was a dumb question because it would never happen. He became the team's manager a few months after hiring; he was a great manager and we are friends.
A manager-level candidate to replace the "dumb question" manager gave five that were all Mac terminal eye candy. He did not get the job.
A senior-level SRE hire with a software background only needed two, emacs and a compiler; he could write anything else he needed.
My last interview, for the job I'm currently employed in, asked for a take-home assignment where I was allowed to use any tool I'd use regularly, including AI. A similar process applied for the live coding interview that followed, iterating on the take-home. I personally used it to speed up writing initial boilerplate and test suites.
I fail to see why this wouldn't be the obvious choice. Do we disallow linters or static analysis in interviews? This is a tool, and checking for skill and good practices using it makes perfect sense.
There’s no other industry* that interviews experienced people the way we do. So maybe just do what everyone else does.
Everyone is so terrified of hiring someone that can’t code, but the most likely bad hires and the most damaging bad hires are bad because of things that have nothing to do with raw coding ability.
*Except the performing arts. The way we interview is pretty close to the way musicians are interviewed, but that’s also really similar to their actual job.
I've been arguing that "AI" has very little impact on meaningful technical interviews, that is ones that don't test for memorization of programming trivia: https://blog.sulami.xyz/posts/llm-interviews/
Prediction: FAANGs will come up with something clever, or random, or just fly everyone onsite; they are so rich and popular that they can filter by any arbitrary criteria.
Second-rate companies will keep some superficial coding but will start to emphasize the verbal parts more, like system design and retrospectives. Which sucks, because those are totally subjective and mostly filter for whoever can BS better on the spot and/or cater better to the interviewer's mood and biases.
My favorite still: in-person pair programming for a realistic problem (could be made-up or shortened, but similar to the real ones on the job). Use whatever tools you want, but get the correct requirements and then explain what you just did, and why.
A shorter/easier task is to code review/critique a chunk of code, could even just print it out if in person.
This whole conversation is depressing me. When I left work a couple of years ago due to health reasons, AI was just beginning to become a problem. Now, thanks to a clinical study, I may possibly be able to return to work, and it sounds like the industry has changed drastically.
How about paid internships as a way to filter candidates? As in, hire a candidate for a small period of time, like 2 weeks or something, and have them work on a real task with a full-time employee and use their performance on that to decide whether or not to hire.
I realize it's not easy for smaller companies to do, but I think it's the single best way to see if someone's fit for a job
Our tech screen is having the candidate walk me through a small project that I created to highlight our stack. They watch my screen and solve a problem to get the app running then they walk me through a registration flow from the client to the server and then returning to the client. There are no gotchas but there are opportunities to improve on the code (left unstated... some candidates will see something that is suboptimal and ask why or suggest some changes).
We get to touch on client and browser issues, graphQL, Postgres, Node, Typescript (and even the various libraries used). It includes basic CRUD functionality, a third party API integration, basic security concerns and more. It's just meant to gauge a minimal level of fluency for people that will be in hands on keyboard roles (juniors up to leads, basically). I don't think anyone has found a way to use AI to help them (yet) but if this is too much for them they will quickly fall flat in the day to day job.
Where we HAVE encountered AI is in the question/answer portion of the process. So far many of those have been painfully obvious but I'm sure others were cagier about it. The one incident that we have had that kind of shook us was when someone used a stand-in to do the screen (he was fantastic, lol) and then when we hired him it took us about a week to realize that this was a team of people using an AI avatar that looked very much like the person we interviewed. They claimed to be in California but were actually in India and were streaming video from a Windows machine to the Mac we had supplied for Teams meetings. In one meeting (as red flags were accumulating) their Windows machine crashed and the outline of the person in Teams was replaced by the old school blue screen of death.
I'm someone who hated leetcode-style interviews for the longest time, but I'm starting to come around on them. I get that these questions are easy to game, but I still think they have _some_ value. The point of this style of question was supposed to be testing your ability to problem-solve and come up with a good solution given the tools you knew. That said, I don't think every company should be using this type of question in their interviews. I think leetcode-style questions should be reserved for companies that are pushing the boundary of the industry, since they're exploring uncharted territory and need people who can come up with unique solutions to problems no one really understands yet. Most companies would be fine with some kind of pairing problem, since most people are solving engineering problems rather than computer science problems. But none of this matters, since we all know that even if we went that direction as an industry, the business people would fuck it up somehow anyway.
I had no idea people took hackerrank as a serious signal rather than as a tool for recent graduates to track interview prep progress. Surely it has all the same issues AI does: you have no way of verifying that the person who takes the interview actually is responsible for that signal.
I don't see AI as a serious threat to the interview process unless your interview process looks a lot like hackerrank.
I feel like we, SWEs, have been over-engineering our interview process. Maybe it's time to simplify it, for example, just ask questions based on the candidate's resume instead of coming up with random challenges. I feel like all the new proposals seem overly complicated, and nobody, interviewer or interviewee, is happy with any of them.
Licensing. We do the Leetcode interview in a controlled testing center. When you apply for a position, I look up your license number, then I know you can leetcode without wasting any of my developer resources on determining that.
When it comes to interviews I generally stick to asking fairly easy questions but leaving out key details, I care a lot more about candidates asking following ups and talking through the code they are writing over what code they actually produce. If a candidate can ask questions when they don't understand something and talk through their thought process then they are probably going to be a good person to work with. High level design questions are often pretty valuable I find as well, which I usually don't require code for I just ask them to talk through their ideas of how they would design an application.
In my uni days, I respected professors who designed exam in a way where students can utilize whatever they could to complete the assignment, including internet, their notes, calculators, etc.
I think the same applies to good tech interviews. Companies should adapt their hiring process to befriend AI, not fight it.
Nah, AI killed stupid tech interviews. You can easily get an idea of someone's competence by literally just talking to them, instead of making them do silly homework exercises and testing their rote memorization abilities.
Thank fuck; they are terrible. Being interviewed by CTOs just out of university with no experience, for a "senior in everything" role. They ask you to do some lame assignment, a pet problem, without once looking at 20 years of GitHub repos and open source contributions.
So, when AI can pass the tech interview seamlessly, I guess we can just hire it?
Maybe the future will be human shills pretending to be job candidates for shady AI "employment agencies" that are actually just (literally) skinning gpt6 APIs that sockpuppet minimum-wage developing-nation "hosts"?
I don't think it did, if anyone cares.
The way I've been advocating to my colleagues who are concerned about "cheating" is that there's probably a problem with the interview process.
I prefer to focus on the think, rather than the solve.
Collaborate, as opposed to just do.
Things that really tell me if I can work with that person and if together, we can make good things.
Unless the job you're interviewing for is remote-only, this makes perfect sense. If you expect your candidates to be able to work in your office, they should be interviewed there.
I think that a mythology about where the difficulty in working with computers lies has made the relationship between businesses and the people they hire to do this stuff miserable for quite some time.
"Coding", as in writing stuff in programming languages with correct syntax that does the thing asked for in isolation, has always been a very dumb skill to test for. Even before we had Stack Overflow, syntactic issues were something you could get through by consulting a reference book or doing some trial and error with a REPL or a compiler. That this is faster now with internet search and LLMs is good for everyone involved, but the fact that it's not what matters remains.
The important part of every job that gets a computer to do a thing is a combination of two capabilities: problem-solving, that is, understanding the intended outcome and having intuition about how to get there through whatever tools are available; and frustration tolerance, the ability to keep trying new* stuff until you get there.
Businesses can then optimize for things like efficiency or working well with others once those constraints are met, but without those capabilities you simply can't do the job, so they're paramount. The problem with most dinky little coding interviews wasn't that you could "cheat"; it's that they basically never tested for those constraints by design, though some clever hiring people manage to tweak them to do so on an ad hoc basis.
* Important because a common frustration failure mode is repetitive behavior. Try something. Don't understand why it doesn't work. Get more frustrated. Try the same thing again. Repeat.
Funny enough, the songs from the website Coding For Nothing about grinding LeetCode and endless take-home projects seem very relevant, and everything nowadays feels like a meme.
Tech interviewing has become a weird survival game, and now AI is flipping the rules again. If you need a laugh: https://codingfornothing.com
One option is to make the interviews harder and let candidates use AI, to see how they work with it and actually build a working product. They will be using AI on the job anyway, so let them use it instead of answering stupid algorithm questions about sorting an array.
So there’s AI that’s really good at doing the skills we’re hiring for. We want you to not use AI so we can hire you for a job that we’re saying we’re going to replace with AI. Sounds like a great plan.
Maybe we don't need employers. Maybe we need a bunch of 1-person companies. I don't think AI is yet the force multiplier that makes that feasible for the masses, but who knows what things will look like in a few years.
I think code design can often cover just as much as actual code anyway. Just describe to me how your solve it, the interfaces you'd use, and how you'd show me you solved it.
As an interviewee it's insane to me how many jobs I have not gotten because of some arbitrary coding problem. I can confidently say after having worked in this field for over a decade and at a FAANG that I am a very capable programmer. I am considered one of the best on every team I've been on. So they are definitely selecting the wrong people IMO.
* Take a candidate's track record into account. Talk with them about it.
* Show that you're experienced yourself, by being able to tell something about what someone would be like to work with, by talking with them.
* Get a reputation for your company not tolerating dishonesty. If someone cheats in an interview and gets caught, they're banned there, all the interviewers will know, and the cheater might also start to get a reputation beyond that company. (Bonus: Company reputation for valuing honesty is attractive to people who don't want dishonest coworkers.)
* Treat people like a colleague, trying to assess whether it's a good match. You're not going to be perfectly aligned (e.g., the candidate or the company/role might be a bit out of the other's league right now), but to some degree you both want it to be a good match for both parties. Work as far as you can with that.
(Don't do this: Leetcode hazing, to establish the dynamic of them being there to dance for your approval, so hopefully they'll be negged, and will seek your approval, won't think critically about how competent and viable your self/team/company are, and will also be less likely to get uppity when you make a lowball offer. Which incidentally places the burden of rehearsing for Leetcode ritual performances upon the entire field, at huge cost.)
We did an experiment at interviewing.io a few months ago where we asked interviewees to try to cheat with AI, unbeknownst to their interviewers.
In parallel, we asked interviewers to use one of 3 question types: verbatim LeetCode questions, slightly modified LeetCode questions, and completely custom questions.
- Interviewers couldn't tell when candidates were cheating at all
- Both verbatim and slightly modified LeetCode questions were really easy to game with AI
- Custom questions, on the other hand, were not gameable[1]
So, at least for now, my advice is that companies put more effort into coming up with questions that are unique to them. It's better for candidates because they get better signal about the work, it reduces the value asymmetry (companies have to put effort into their process instead of just grabbing questions from LeetCode etc), and it's better for employers (higher signal from the interview).
[1] This may change with the advent of better models
The death of shitty interviews has been greatly exaggerated.
AI might make e.g. your leetcode interview less predictive than it previously would have been. But was it predictive in the first place? I don't think most interviews are written by people thinking in those terms at all. If your method of interviewing never depended on data suggesting it actually, you know, worked in the first place, why would it matter if it starts working even worse?
Insofar as it makes the shittiness of those interviews more visible, the effect of AI is a good thing. An interview focused on recall of some specific algorithm was never predictive; it's just that its failure is now visible in a way that Generic Business Idiots can understand.
We frequently interview people who both (a) claim to have been in senior IC roles (not architect positions, roles where they are theoretically coding a lot) for many, many years and (b) cannot code their way out of a paper bag when presented with a problem that requires even a modicum of original reasoning. Some of that might be interview nerves, of course, but a lot of these people are not at all unconfident. They just...suck. And I wonder if what we're seeing is the downstream effects of Generic Business Idiots hiring primarily people who memorize stuff than people who build stuff.
The inconvenient truth is that everything circles back to in-person interviews.
The article addresses this:
>A lot of companies are doing RTO, but even companies that are 100% in-office still interview candidates from other cities. Spending money to fly every candidate out without an aggressive pre-screen is too wasteful.
No. Accidentally hiring someone who AI'd their way through the interview costs orders of magnitude more to undo. It's absolutely worth paying for a round-trip flight and a couple of days of accommodations.
1point3acres is massacring tech interviews right now. Having to pay $80/month to some China based website where NDA-protected interview questions are posted regularly, then being asked the same questions in the interview, seems insane.
It also feels like interviewers know this and assume you studied the questions; they seem incapable of giving hints, etc., if you don't have the questions memorized.
Very funny :) I too failed an interview at Google, also related to binary search on a whiteboard. I never write with pens; I'm on keyboards the whole time, and my handwriting is terrible.
I've built a search engine for two countries and then I was failed by a guy that wears cowboy hats to work at google in Ireland. Not a lot of cows there I'm guessing. (No offence to any real cowboys that work at google of course).
I did like the free flight to Ireland, though, and the nice lunch. Though I was disappointed I lost the "Don't be evil" company booklet.
The best interview process I've ever had was going to work with former coworkers, aka no real process. A couple of quick calls with new people who deferred strongly to the person who knew me, my work, and my values. Nothing else has the signal value.
Of course the problem is this can't scale or be outsourced to HR, but is this a bug or a feature?
The best interview processes are chill, laid back, open ended.
That's the only way you're going to get relevant information.
I've been verified to the moon and back by Apple and others for roles that could never have worked.
The problem is that when it comes to the hiring process, everyone is suddenly an expert; no matter how dysfunctional, inhumane and destructive their ideas are.
Anyone who suggests a pair programming solution is right, but answering the wrong question. Unless and until we return to a covid-like market, the process will never be optimized for the candidate, and this is just too expensive an approach for employers. In this market I think the answer is hire less.
I just ask to share a text editor and write down my questions. It's critical anyway, because more often than not it's not clear for tech questions what exactly I asked (a Linux command, for example).
This blocks their screen too.
And yes, we do know very soon if you look somewhere else, take time, or rephrase the question to get more time.
If you're able to fake it, at that point you should just get the job anyway :P
Interestingly I find AI is actually better at that kind of CS whiteboard question (implementing a binary search tree) than that "connecting middlewares to API" type task. Or at least, it's more straightforward to apply the AI, when you want a set of functions with clearly defined interfaces written - rather than making a bunch of changes across existing files.
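To make the contrast concrete, the whiteboard-style task is a small set of functions with clearly defined interfaces, which plays to an LLM's strengths; a minimal sketch (assumed, not from the article) of the binary search tree case:

```python
class Node:
    def __init__(self, key: int):
        self.key = key
        self.left: "Node | None" = None
        self.right: "Node | None" = None

def insert(root: "Node | None", key: int) -> Node:
    """Insert key and return the (possibly new) root; duplicates are ignored."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root: "Node | None", key: int) -> bool:
    # walk down the tree, going left or right by comparison
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Self-contained interfaces like these are easy to specify in a single prompt; threading a change through several existing middleware files is not, which is presumably why the AI does so much worse at the latter.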
I've been considering using a second webcam stream focused on my screen, just to assure hiring managers that I don't have ChatGPT on my screen or anywhere else. Kind of like chess players sometimes do in online tournaments. I've been hearing people complain about cheating a lot.
If using AI is cheating, then one solution, as the author mentions, is to have the interview take place at an office. But I'm surprised another approach isn't more readily available: having the candidate take the test remotely at a trusted third-party location.
I've been interviewing a bunch of developers the past year or so, and this:
> Architectural interviews are likely safe for a few years yet. From talking to people who have run these, it’s evident that someone is using AI. They often stop with long pauses, do not quite explain things succinctly, and do not understand the questions well enough to prompt the correct answer. As AI gets better (and faster), this will likely follow the same fate as the rest but I would give it some years yet.
Completely matches my experience. I don't do leet code BS, just "let's have a talk". I ask you questions about things you tell me you know about, and things I expect of someone at the level you're selling yourself at. The longest it's taken me to detect one of these scumbags was 15 minutes, and an extra 5 minutes to make sure.
Some of them make mistakes that are beyond stupid, like identity theft of someone who was born, raised and graduated in a country whose main language they cannot speak.
The smartest ones either do not know when to stop answering your questions with perfect answers (they just do not know what they're supposed to not know), or fumble their delivery and end up looking like unauthentic puppets. You just keep grinding them until you catch em.
I'm sure it's not infallible, but that's inherent to hiring. The only problem with this is cost, you're going to need a senior+ dev running the interview, and IME most are not happy to do so. But this might just be what the price of admission for running a hiring pipeline for software devs is nowadays. Heck, now feels like a good time to start a recruitment process outsourcing biz focused on the software industry.
I never understood why Big Tech never setup contracts with all the SAT and ACT test centers across the country. Even before Zoom with Codepads it would have made sense for the recruiters to send potential candidates to a test center to do a pre assessment rather than waste time with engineers sitting on prescreen calls all day.
One option is missing from the list of non-solutions the author presents: ditch the idiotic whiteboard/"coding exercise" interview style. Voila, the AI (non)problem solved!
This sort of comp-sci-style exam with quizzes and whatnot maybe somewhat helps when hiring a junior with zero experience, fresh out of school.
But why are people with 20+ years of easily verifiable experience (picking up a phone and asking for references is still a thing!) being asked to invert trees and implement stuff like quicksort or some contrived BS assignment the interviewer uses to boost their own ego but with zero relevance to the day to day job they will be doing?
Why are we still wasting time with this? Why is the default assumption always that the applicants are all crooked impostors lying on their resumes?
99% of jobs come with probationary period anyway where the person can be fired on the spot without justification or any strings attached. That should be more than enough time to see whether the person knows their stuff or not after having passed one or two rounds of oral interviews.
It is good enough for literally every other job - except for software engineering. What makes us the special snowflakes that people are being asked to put up with this crap?
Never really liked leetbro interviews. Always reeked of
“SO YOU THINK YOU CAN CODE BRO? SHOW ME WHAT YOU GOT!”
The majority of my work over 10+ years of experience has always relied on general problem solving and soft skills like collaborating with others, not rote memorization of in-order traversal.
> Tech interviews are one of the worst parts of the process and are pretty much universally hated by the people taking them.
True.
> One of the things we can do, however, is change the nature of the interviews themselves. Coding interviews today are quite basic, anywhere from FizzBuzz, to building a calculator. With AI assistants, we could expand this 10x and have people build complete applications. I think a single, longer interview (2 hours) that mixes architecture and coding will probably be the way to go.
Oh.... yeah, that sounds just... great.
If that's where things are going, I'm retraining to become a line cook at McDonald's.
The image below does sum it up but not in the way the author thinks.
Google wants to hire people who complete their hiring process. They're OK with missing out on some people who would be excellent but who can't/won't make it through their hiring process.
The mistake may lie in copying Google's hiring process.
None of this makes any sense. Why should I complete a tech-test interview if I have 15 years of experience at a top firm? I've effectively done it already.
I had a ‘principal engineer’ at my last place who grinded leetcode for 100 days and still failed a leetcode interview. It's utter nonsense.
A conversation around technical questions and topics should suffice. Hire fast, fire fast.
LLMs killed busy work. Now people have to actually talk to each other, and they're finding out that we've been imitating functionality instead of being functional.
What a BS article. As they say, just do the interview in person. Problem solved. I'm not sure about the US, but 99% of jobs here in Spain are hybrid or onsite ("presencial"), not fully remote.
They're acting like all jobs are remote and it's impossible to do an interview in person.
Also, does it really matter? If a person is good at using AI and manages to produce good code with it, is that really so much worse than someone who does it off the top of their head? I think we have to drop the idea that AI is going to go away. I know it's all overhyped right now, but there is definitely something to it. It will be another tool in our toolboxes, just like Stack Overflow has been for ages (and that didn't kill interviews either).
Show the remote candidate an AI's deficient answer to a well-asked question, and ask the candidate if they understand what exactly is wrong with the AI's assessment, or what the follow-up/rewritten prompt to the AI should be. Compile a library of such deficient chats with the AI.
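For instance (my own contrived example, not from any real chat), the "deficient answer" can be as small as a plausible-looking function hiding a classic landmine, with the candidate asked what's wrong and how they'd re-prompt:

```python
# Hypothetical "AI answer" to: append a tagged event to a per-user
# history, creating the history if needed.
def record_event(user, event, history={}):  # bug: shared mutable default
    history.setdefault(user, []).append(event)
    return history

# A strong candidate spots that the default dict is created once at
# function-definition time and shared across calls, so supposedly
# independent histories leak into each other:
a = record_event("alice", "login")
b = record_event("bob", "logout")
assert a is b  # the same dict: alice's and bob's events intermix
```

The fix the candidate proposes (default to None and create a fresh dict inside the function) is itself useful signal.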
It's a tricky subject, because what if people who use AI are just better together? And what if in a year from now, AI by itself is better? What's the point of hiring anyone? Perhaps this is the issue behind the problems being described, which might be mere symptoms. There are tons of very smart teams working on software that will basically replace the people you're hiring.
Or you can just give them a way to bypass all of that: ask them about a significant project the candidate built (relevant to the job description, open or closed source, as long as it was released), or about contributions to widely used, significant open-source projects. (Not hello-world or demo projects, or README changes.)
Both scenarios are easily verifiable (you can check that the project was released, or whether they made that commit or not), and in the open-source case the interviewer can look at how you code-review with others, and how you respond to and reason about others' review comments, all in public, to see if you actually understand the patches you or another person submitted.
A conversation can be started around it, and that eliminates 95% of frauds. If the candidate can't speak to any of this, then there's no choice but to give a hard leetcode/hackerrank challenge and interview them again to explain their solution and the reasoning behind it.
A net positive for everyone, and all it takes to qualify is to build something you can point to, or to contribute to a significant open-source project. Unlike Hackerrank, which has become a negative-sum race to the bottom with rampant cheating thanks to LLMs.
After that, a simple whiteboard challenge and that is it.
I cannot emphasize this enough. Coding is the EASY part of writing software. You can teach someone to code in a couple of months. Interviews that focus on someone's ability to code are just dumb.
What you need to do is see how well they can design before writing software. What is their process for designing the software they make? Can they architect it correctly? How do they capture user's mental models? How do they deal with the many "tops" that software has?
No it didn't; you just need to stop asking questions an LLM can easily solve. Most of those were probably terrible questions to begin with.
I can create a simple project with 20 files, where you would need to check almost all of them to understand the problem you need to solve, good luck feeding that into an LLM.
Maybe you have some sneaky script or IDE integration that does this for you, fine, I'll just generate a class with 200 useless fields to exhaust your LLM's context length.
Or I can just share my screen and ask you to help me debug an issue.
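(The padding trick above is trivial to automate; a tongue-in-cheek sketch, with the names invented for illustration:)

```python
# Generate the source of a junk class with N useless fields, purely to
# bloat an interview repo against wholesale copy-paste into an LLM.
def junk_class(name="Padding", n_fields=200):
    lines = [f"class {name}:", "    def __init__(self):"]
    lines += [f"        self.field_{i:03d} = {i}" for i in range(n_fields)]
    return "\n".join(lines)

print(junk_class()[:60])  # peek at the generated boilerplate
```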
I know nobody likes doing tech interviews, but how has AI killed them? Anyway, you do want to know the basics of computer science; it's a helpful thing if you ever want to progress beyond CRUD shit-shovelling.
Also, wtf is inverting a binary tree? Like doing a "bottom view"? That shit is easy.
AI killed the tech interview. Now what? (kanenarraway.com)
260 points by ghuntley, 19 February 2025 | 605 comments
I agreed to do it as long as they understood that I felt it was a terrible way of assessing someone's ability to code. I was allowed to use any programming language because they knew them all (allegedly).
The solution was a pretty obvious bit-shift. So I wrote memory registers up on the board and did it in Motorola 68000 Assembler (because I had been doing a lot of it around that time), halfway through they stopped me and I said I'd be happy to do it again if they gave me a computer.
They offered me the job. I went elsewhere.
This is a crazy take in the context of coding interviews. First, because it's quite obvious when someone is blindly copy-pasting from Cursor, for example; and since figuring out what to do is a significant portion of the battle, if you can get Cursor to solve a complex problem, elegantly, and in one try, the likelihood that you're actually a good engineer is quite high.
If you're solving a tightly scoped and precise problem, like most coding interviews, the challenge largely lies in identifying the right solution and debugging when it's not right. If you're conducting an interview, you're also likely asking the candidate to walk through their solution, so it's obvious if they don't understand what they're doing.
Cursor and Copilot don't solve for that; they make it much easier to write code quickly once you know what you're doing.
Works really well, and it mimics what we find is the most important bit about coding.
I don't mind if they use AI to shortcut the boring stuff in the day-to-day, as long as they can think critically about the result.
Take the maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know it, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.
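For reference, Kadane's solution is only a few lines once you already know it; the insight (track the best sum ending at the current index, restarting whenever extending the run is worse than starting fresh) is the part that took real discovery:

```python
def max_subarray(nums):
    """Kadane's algorithm: maximum contiguous subarray sum in O(n)."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)    # extend the current run, or restart at x
        best = max(best, cur)
    return best

assert max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6  # [4, -1, 2, 1]
```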
I often see comments like: this person had this huge storied resume but couldn't code their way out of a paper bag. Now having been that engineer stuck in a paper bag a few times, I think this is a very narrow way to view others.
I don't know the optimal way to interview engineers. I do know the style of interview that I prefer and excel at[0], but I wouldn't be so naive as to think that the style that works for me would work for all. I often chuckle about an anecdote from the fabled I.P. Sharp: Ian Sharp would set a light meter on his desk and measure how wide an interviewee's eyes got when he explained APL to them. A strange way to interview, but is it any less strange than interviewing people via leetcode problems?
0: I think my ideal tech-screen question is one that: 1) has test cases; 2) ramps the test cases up gradually in complexity; 3) doesn't reveal the complexity all at once (the interviewer "hides their cards," so to speak); 4) is focused on a data structure rather than an algorithm, so that the algorithm falls out naturally rather than serving as the focus; 5) gives the candidate the opportunity to weigh tradeoffs, make compromises, and cut corners given the time frame; 6) doesn't combine big ideas (i.e., you shouldn't have to parse complex input and then do something complicated with it); pick a single focus. Interviews I have participated in and enjoyed like this: construct a Set class (union, difference, etc.); implement an RPN calculator (ramp up the complexity by introducing multiple arities); create a range function that works like the Python range function (for junior engineers, this one involves a function with different behavior based on arity).
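To make the RPN-calculator question concrete: the opening stage might look like the sketch below (binary operators only), with the interviewer later ramping up the complexity by introducing operators of other arities.

```python
def rpn_eval(tokens):
    """Evaluate a reverse-Polish expression over binary +, -, *, /."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()      # note the operand order for - and /
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

assert rpn_eval("3 4 + 2 *".split()) == 14.0   # (3 + 4) * 2
```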
I have 26 years of solid experience, been writing code since I was 8.
There should be a ton of companies out there just dying to hire someone with that kind of experience.
But I'm not perfect, no one is; and faking doesn't work very well for me.
Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.
I do rely on HR having, hopefully, done their job and validated the work history.
I do have one technical question that started out as fun and quirky but has actually shown more value than expected. I call it the desert island cli.
What are your 5 linux cli desert island commands?
Having a hardware background, today mine are: vi, lsof, netcat, glances, and I'm blanking on a fifth. We have been doing a lot of Terraform lately.
I have had several interesting responses:
A manager-level candidate with 15+ years of hands-on experience thought it was a dumb question because the scenario would never happen. He became the team's manager a few months after hiring. He was a great manager and we are friends.
A manager-level candidate to replace the dumb-question manager: his picks were all Mac terminal eye candy. He did not get the job.
A senior-level SRE hire with a software background only needed two: emacs and a compiler; he could write anything else he needed.
I fail to see why this wouldn't be the obvious choice. Do we disallow linters or static analysis in interviews? AI is a tool, and checking for skill and good practices in using it makes perfect sense.
Everyone is so terrified of hiring someone that can’t code, but the most likely bad hires and the most damaging bad hires are bad because of things that have nothing to do with raw coding ability.
*Except the performing arts. The way we interview is pretty close to the way musicians are interviewed, but that’s also really similar to their actual job.
Second-rate companies will keep some superficial coding, but will start to emphasize more of the verbal parts like system design and retrospective. Which sucks, because those are totally subjective and mostly filters for whoever can BS better on the spot and/or cater to the interviewer's mood and biases better.
My favorite is still: in-person pair programming on a realistic problem (could be made up or shortened, but similar to the real ones on the job). Use whatever tools you want, but get the requirements right, and then explain what you just did and why.
A shorter/easier task is to have the candidate code-review/critique a chunk of code; you could even just print it out if in person.
Not looking forward to it.
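The printed chunk for the review task doesn't need to be long, either. A dozen lines with a few planted issues (this one is contrived) gives juniors and seniors different things to notice, from the O(n) membership test to the silently swallowed bad rows:

```python
# Review exercise: load unique integer user IDs from a text file.
def load_ids(path):
    ids = []
    f = open(path)               # issue: not closed if an exception escapes
    for line in f.readlines():   # issue: reads the whole file into memory
        try:
            uid = int(line.strip())
            if uid not in ids:   # issue: O(n) membership test on a list
                ids.append(uid)
        except ValueError:
            pass                 # issue: malformed rows silently dropped
    f.close()
    return ids
```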
I realize it's not easy for smaller companies to do, but I think it's the single best way to see if someone's fit for a job.
We get to touch on client and browser issues, graphQL, Postgres, Node, Typescript (and even the various libraries used). It includes basic CRUD functionality, a third party API integration, basic security concerns and more. It's just meant to gauge a minimal level of fluency for people that will be in hands on keyboard roles (juniors up to leads, basically). I don't think anyone has found a way to use AI to help them (yet) but if this is too much for them they will quickly fall flat in the day to day job.
Where we HAVE encountered AI is in the question/answer portion of the process. So far many of those have been painfully obvious but I'm sure others were cagier about it. The one incident that we have had that kind of shook us was when someone used a stand-in to do the screen (he was fantastic, lol) and then when we hired him it took us about a week to realize that this was a team of people using an AI avatar that looked very much like the person we interviewed. They claimed to be in California but were actually in India and were streaming video from a Windows machine to the Mac we had supplied for Teams meetings. In one meeting (as red flags were accumulating) their Windows machine crashed and the outline of the person in Teams was replaced by the old school blue screen of death.
I don't see AI as a serious threat to the interview process unless your interview process looks a lot like hackerrank.
I think the same applies to a good tech interview. Companies should adapt their hiring process to work with AI, not fight it.
Maybe the future will be human shills pretending to be job candidates for shady AI "employment agencies" that are actually just (literally) skinned GPT-6 APIs sockpuppeting minimum-wage developing-nation "hosts"?
Show code, ask questions about it that require an opinion.
Collaborate, as opposed to just do.
Things that really tell me if I can work with that person and if together, we can make good things.
Unless the job you're interviewing for is remote-only, this makes perfect sense. If you expect your candidates to be able to work in your office, they should be interviewed there.
"Coding", as in writing stuff in programming languages with correct syntax that does the thing asked for in isolation, has always been a very dumb skill to test for. Even before we had stackoverflow syntactic issues were something you could get through by consulting a reference book or doing some trial and error with a repl or a compiler. That this is faster now with internet search and LLMs is good for everyone involved, but the fact that it's not what matters remains
The important part of every job that gets a computer to do a thing is a combination of two capabilities: Problem-solving, that is, understanding the intended outcome and having intuition about how to get there through whatever tools are available, and frustration tolerance: The ability to keep trying new* stuff until you get there
Businesses can then optimize for things like efficiency or working well with others once those constraints are met, but without those capabilities you simply can't do the job, so they're paramount. The problem with most dinky little coding interviews wasn't that you could "cheat"; it's that they basically never tested for those constraints by design, though some clever hiring people manage to tweak them to do so on an ad hoc basis.
* important because a common frustration failure mode is repetitive behavior. Try something. Don't understand why it doesn't work. Get more frustrated. Try the same thing again. Repeat
Tech interviewing has become a weird survival game, and now AI is flipping the rules again. If you need a laugh: https://codingfornothing.com
As an interviewee it's insane to me how many jobs I have not gotten because of some arbitrary coding problem. I can confidently say after having worked in this field for over a decade and at a FAANG that I am a very capable programmer. I am considered one of the best on every team I've been on. So they are definitely selecting the wrong people IMO.
* Take a candidate's track record into account. Talk with them about it.
* Show that you're experienced yourself, by being able to tell something about what someone would be like to work with, by talking with them.
* Get a reputation for your company not tolerating dishonesty. If someone cheats in an interview and gets caught, they're banned there, all the interviewers will know, and the cheater might also start to get a reputation beyond that company. (Bonus: Company reputation for valuing honesty is attractive to people who don't want dishonest coworkers.)
* Treat people like a colleague, trying to assess whether it's a good match. You're not going to be perfectly aligned (e.g., the candidate or the company/role might be a bit out of the other's league right now), but to some degree you both want it to be a good match for both parties. Work as far as you can with that.
(Don't do this: Leetcode hazing, to establish the dynamic of them being there to dance for your approval, so hopefully they'll be negged, and will seek your approval, won't think critically about how competent and viable your self/team/company are, and will also be less likely to get uppity when you make a lowball offer. Which incidentally places the burden of rehearsing for Leetcode ritual performances upon the entire field, at huge cost.)
In parallel, we asked interviewers to use one of 3 question types: verbatim LeetCode questions, slightly modified LeetCode questions, and completely custom questions.
The full writeup is here: https://interviewing.io/blog/how-hard-is-it-to-cheat-with-ch...
TL;DR:
- Interviewers couldn't tell when candidates were cheating at all
- Both verbatim and slightly modified LeetCode questions were really easy to game with AI
- Custom questions, on the other hand, were not gameable[1]
So, at least for now, my advice is that companies put more effort into coming up with questions that are unique to them. It's better for candidates because they get better signal about the work, it reduces the value asymmetry (companies have to put effort into their process instead of just grabbing questions from LeetCode etc), and it's better for employers (higher signal from the interview).
[1] This may change with the advent of better models
AI might make e.g. your leetcode interview less predictive than it previously would have been. But was it predictive in the first place? I don't think most interviews are written by people thinking in those terms at all. If your method of interviewing never depended on data suggesting it actually, you know, worked in the first place, why would it matter if it starts working even worse?
Insofar as it makes the shittiness of those interviews more visible, the effect of AI is a good thing. An interview focused on recall of some specific algorithm was never predictive; it's just now non-predictive in a way that Generic Business Idiots can understand.
We frequently interview people who both (a) claim to have been in senior IC roles (not architect positions, roles where they are theoretically coding a lot) for many, many years and (b) cannot code their way out of a paper bag when presented with a problem that requires even a modicum of original reasoning. Some of that might be interview nerves, of course, but a lot of these people are not at all unconfident. They just...suck. And I wonder if what we're seeing is the downstream effects of Generic Business Idiots hiring primarily people who memorize stuff than people who build stuff.
The article addresses this:
>A lot of companies are doing RTO, but even companies that are 100% in-office still interview candidates from other cities. Spending money to fly every candidate out without an aggressive pre-screen is too wasteful.
No, accidentally hiring someone who AI'd their way through the interview costs orders of magnitude more to undo. It's absolutely worth paying for a round-trip flight and a couple days of accommodations.
It also feels like interviewers know this and assume you've studied the questions; they seem incapable of giving hints and the like if you don't have the questions memorized.
AI is the least of it.
I've built a search engine for two countries and then I was failed by a guy that wears cowboy hats to work at google in Ireland. Not a lot of cows there I'm guessing. (No offence to any real cowboys that work at google of course).
I did like the free flight to Ireland, though, and the nice lunch. Though I was disappointed I lost the "Do no evil" company booklet.
Of course the problem is this can't scale or be outsourced to HR, but is this a bug or a feature?
That's the only way you're going to get relevant information.
I've been verified to the moon and back by Apple and others for roles that could never have worked.
The problem is that when it comes to the hiring process, everyone is suddenly an expert; no matter how dysfunctional, inhumane and destructive their ideas are.
Isn't it that simple?
This blocks their screen too.
And yes, we do know very soon if you look somewhere else, take time, or rephrase the question to get more time.
If you're able to fake it, at that point you should just get the job anyway :P
> Architectural interviews are likely safe for a few years yet. From talking to people who have run these, it’s evident that someone is using AI. They often stop with long pauses, do not quite explain things succinctly, and do not understand the questions well enough to prompt the correct answer. As AI gets better (and faster), this will likely follow the same fate as the rest but I would give it some years yet.
Completely matches my experience. I don't do leet code BS, just "let's have a talk". I ask you questions about things you tell me you know about, and things I expect of someone at the level you're selling yourself at. The longest it's taken me to detect one of these scumbags was 15 minutes, and an extra 5 minutes to make sure.
Some of them make mistakes that are beyond stupid, like identity theft of someone who was born, raised and graduated in a country whose main language they cannot speak.
The smartest ones either don't know when to stop answering your questions with perfect answers (they just don't know what they're supposed to not know), or fumble their delivery and end up looking like inauthentic puppets. You just keep grinding them until you catch 'em.