I don't think AI code is a serious game-changer for hiring or education in the short term, even giving it the benefit of the doubt on its power, which is still limited. You mention the problem of students plagiarizing solutions available online, but humans hired to do the work are likely just as big a contributor to cheating, and they're similar to AI in that the cheater only needs to state the specification to get the code back.
Sure, AI assistants are more accessible than tracking down (and potentially paying) a human to complete an assignment, but at the end of the day the outcome is more or less the same, assuming the cheating isn't caught: a student gets a good grade on an assignment, course or exam that they didn't deserve, or a prospective employee gets hired for a job they're unqualified for.
The problem in both student and employee cases is that they've created an unsustainable situation in their immediate future. For the student, later assignments/exams/courses are going to get harder, and if they've never built a solid conceptual foundation, then they'll either have to come clean and work extra hard to fill in knowledge gaps, or they'll have to resort to further cheating to keep up appearances. Same for the employee: they're hired, but it'll likely be a huge struggle if they're unqualified, with disastrous consequences for them and the employer. And these are the "lucky" ones who weren't caught immediately.
The techniques for catching cheaters don't seem significantly different between AI and existing cheating approaches. For example, you can ask students/candidates to explain their code verbally (written responses are easily hired out to a human) and/or watch the solution coded live. Of course, a verbal defense isn't a perfect foil, and it doesn't scale well (I've participated in MOOC classes of over 500 students to a teaching staff of about a dozen); remote hiring and learning complicate it further.
Additionally, it's fairly easy to programmatically compare submitted code against the output of the most popular AI tools using Moss. That suggests human-hired plagiarism might actually be harder to detect than AI plagiarism, because a hired coder's solution is more likely to be unique, unless the hired coder made no effort to disguise the copy or simply used an AI solution on the student's/candidate's behalf.
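To make the idea concrete: Moss itself is a hosted service (you submit files and get back a similarity report), and its internals are more sophisticated than this, but the basic "compare submissions against a corpus of AI-generated reference solutions" workflow is easy to sketch locally. The snippet below is a toy illustration using Python's standard-library difflib, not Moss's actual algorithm; the file paths and the 0.85 threshold are made-up placeholders.

```python
# Rough sketch: flag submissions that closely match a corpus of
# AI-generated reference solutions. This is a toy stand-in for a real
# similarity checker like Moss; paths and threshold are illustrative.
import difflib
from pathlib import Path

THRESHOLD = 0.85  # arbitrary cutoff; tune against known-honest submissions


def normalize(source: str) -> str:
    """Strip blank lines and surrounding whitespace so trivial
    reformatting doesn't hide a copied solution."""
    return "\n".join(line.strip() for line in source.splitlines() if line.strip())


def similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1] between two normalized sources."""
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()


def flag_submission(submission_path: str, reference_dir: str) -> list[tuple[str, float]]:
    """Return (reference_file, score) pairs that exceed the threshold."""
    submission = Path(submission_path).read_text()
    hits = []
    for ref in Path(reference_dir).glob("*.py"):
        score = similarity(submission, ref.read_text())
        if score >= THRESHOLD:
            hits.append((ref.name, score))
    return sorted(hits, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical layout: one student file, a folder of AI-generated answers.
    for name, score in flag_submission("student_solution.py", "ai_reference_solutions"):
        print(f"{name}: {score:.0%} similar")
```

A real checker like Moss also tokenizes the code and fingerprints token sequences, so renaming variables or shuffling functions doesn't defeat it the way it would defeat this naive text diff.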
If, someday, real-world programming tasks can be completed purely with AI assistance and no understanding is needed, there will likely be new technical problems to assess. Exams and assignments will be less about whether a solution can be coded and understood line by line and more about whether the right problem is being solved, how the AI tool was guided, how parameters were fine-tuned, system-level design, and so forth. A lot of these new requirements look a lot like high-level data science, deep learning, and ML work. Higher levels of abstraction bring different problems rather than eliminating problems altogether. The wrinkle is that high-level tools make low-level tasks easy, which is nothing new (for example, godbolt for completing assembly assignments).
A lot of this makes me think of doping in cycling, using data science to optimize shot locations in basketball, Stockfish in chess, Auto-Tune in music, etc. If everyone is cheating, is anyone cheating, or are those just the new rules of the game? Where do we draw the line between using a powerful, available tool effectively and cheating? Technological innovation can create this kind of turmoil, but unlike doping on a bike, it's still hard to cheat your way through a whole CS degree, bootcamp, or job with AI code. And if it gets to the point where you can, that'll be the time to reassess the game (or find stricter ways to cling to and enforce the old rules).