Today we're launching our newest candidate quality-of-life feature on Ropes: IDE themes.

There's a universal dislike for the rigid IDEs that assessment companies force on candidates. These code editors are slow, clunky, and missing the styles and shortcuts that developers enjoy. Developers are unique, and we work in unique ways.

Ropes offers two new options to let candidates keep their style:

(1) Many employers on Ropes allow candidates to work entirely in their own IDEs. We've developed a way to gather insights as if you were in the room, while letting developers use their daily environments.

(2) For cases that need a browser-based IDE, candidates can now choose their theme and editor setup before the assessment starts, so they're comfortable when it begins.

---

My bet is that if we build a great experience for candidates, we'll be building a great platform for evaluators, too. I hope you'll follow along as we pursue that mission - and we need help! We're hiring founding engineers in NYC - if you're interested, I'd love to talk to you.
Ken Schumacher’s Post
Found this visualization interesting from my old friends at Retool's State of AI report. <50% of engineering teams and only 1 in 10 HR teams are currently using AI!

At Ropes, a big portion of our engineering work goes to ensuring AI outputs are trustworthy and correct. It's relatively easy to slap AI onto a product - much harder when you need to guarantee correctness. As founders and builders, we have a long way to go in making these tools ubiquitous. But it's worth the effort - the payoffs are enormous (especially for early-stage teams).

Linking the full Retool report below! It's a good high-level look at real AI adoption today across B2B if you're curious.
Debuting a new candidate experience landing page on our site today! Most know Ropes gets the same signal as live interviews - fewer realize that candidates prefer it, when given the choice.
Ropes gives candidates the freedom to think, instead of being watched like a hawk by an interviewer. See it for yourself: https://lnkd.in/evPGQJDU
There are great candidates sitting in your ATS today. But you need to give them the rope to prove it.

In an employer's market, top startups are being flooded with applications. There are A-players in that pool - but you can't identify them in a resume screen. Anyone can list skills on a resume, and with the impact of ChatGPT, all applications are starting to look the same.

The top-of-funnel technical screen should be the candidate's opportunity to prove their value and jump into your onsite funnel. But that's not how it works today. Most assignments are pass/fail, so great talent blends in with the more general "pass" bucket. By the time you get around to actually considering their application, they've already signed elsewhere.

We've created dynamic problems in Ropes for these talented candidates. Since Ropes' problems get progressively harder, the top possible score isn't a "pass" or an 8/8 - it's unbounded.

If you want to see it live - just send me a note!
Coding assessments are a two-way street - your candidates are vetting you, the employer. Is this somewhere they want to work?

Candidates have very limited signal before entering your process. Would I enjoy working here? Would I be adequately challenged? Am I going to learn from the day-to-day of my work? Candidates start trying to answer these questions the minute they enter your interview process.

In light of that, it's amazing how many companies use out-of-the-box, textbook-style challenges and scare off candidates. These judgments aren't evenly distributed, either - it's the best candidates, the ones with optionality, who drop off first.

At Ropes, we use LLMs to generate unique, case-study-style assessments for your candidates. Startups have more pressing priorities than crafting the perfect candidate challenge - that's why they rely on Ropes for a best-in-class process without investing the time to build it manually.

If you're using a library take-home assignment or test in your recruiting process - we'd love to help! Just shoot me a message to get started.
Should candidates be able to use AI in take-home assessments?

I’m now seeing this come up nearly every day, and it’s not hard to understand why - candidates using AI tools on legacy coding tests have a huge leg up. When an early adopter migrated their take-home test to Ropes, we worked with them to first measure the completion times of candidates using AI tools (e.g. ChatGPT) vs. candidates who don’t. We found candidates using AI completed the same test >3x faster than candidates who weren’t!

We know this is unfair and a poor candidate experience - so what do we do about it? My opinion: you can’t outrun AI. Even if GPT-4 can’t make meaningful progress on your current problem, GPT-5 will.

Employers need to shift from testing whether candidates can solve assessments to testing HOW candidates solve problems. How does your candidate react when their code doesn’t compile? How do they first break down an ambiguous problem into one that’s solvable? Once you understand the “how”, you can welcome (and even encourage!) candidates’ use of AI in your assessment.

The customer above changed their problem to explicitly encourage AI use and looked for where and when candidates decided to use LLMs. They got a higher-quality set of hires - and their candidates got a better experience, too.

Sorting arrays and writing simple code statements is already solved by today’s AI models - it shouldn’t be what you base your hiring decisions on. The “how” of a developer has never been more important in the world of AI - and that’s what we focus on measuring for you at Ropes.
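For anyone curious how a comparison like the ">3x faster" figure might be computed, here's a minimal sketch. The numbers below are purely illustrative placeholders (not Ropes data), and the median-of-medians approach is an assumption about how one might summarize the two groups:

```python
# Hypothetical completion times in minutes (illustrative only, not real data)
ai_times = [22, 18, 25, 20, 19]       # candidates who used AI tools
no_ai_times = [70, 85, 62, 90, 78]    # candidates who didn't

def median(xs):
    """Return the median of a list of numbers."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Compare the typical (median) completion time of each group
speedup = median(no_ai_times) / median(ai_times)
print(f"AI-assisted candidates finished {speedup:.1f}x faster")
```

Using medians rather than means keeps a single very slow or very fast candidate from skewing the comparison, which matters with the small sample sizes typical of a single assessment.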
We're mid-flight home from our first UNLEASH event - Ropes had a blast. A few common themes we heard at our booth:

1) TA leaders are looking for real AI applications

Almost every vendor has slapped "AI" onto their existing product, but relatively few have innovated beyond chatbots, search filters, etc. Those that have were the talk of the town at UNLEASH this year. As an industry, there's still so much untapped potential in the recent LLM innovations.

2) Protecting the integrity of assessments is more important than ever

Today's candidates are savvy enough to find answers to your textbook-style coding questions online, or will plug them straight into ChatGPT. Employers see two options: either write custom questions manually (or quickly, on Ropes) so candidates can't easily cheat, or allow AI tools in your assessments. It was great validation to see that the time we've invested in anti-cheating measures was well spent. Happy to demo these to anyone curious!

3) Candidate experience still matters

Despite an employer-friendly job market, TA teams are still focused on providing a great candidate experience. The days of multi-hour assessments seem nearly over - employers are looking for ways to get the same signal from short, intentional assessments.

---

On a personal note - feeling incredibly pumped from this week. There are few parts of this job more enjoyable than seeing the awe of a new prospect mid-demo, or reconnecting with one of our early adopters on the show floor. There are great people in this space, and it was great making so many new friends.

Back to the office tomorrow - looking forward to the next show!

-ken
Going to #UnleashAmerica next week? Let's chat tech hiring. https://ropes.ai/unleash
Ropes is excited to exhibit at #UnleashAmerica next week in Vegas! We'll be running personalized demos all week - come check it out. Book time to meet the Ropes team and our founder Ken Schumacher next week! https://ropes.ai/unleash #TalentAcquisition #CodingAssessments #TechHiring