However magical the hyperspatial and statistical characteristics of LLMs are, it helps to remember that LLMs are potential amplifiers and not necessarily replacements/automators. Can they generate code that works? Yes. Do they always? No. Not even with orchestrator iterations.
We have a romantic idea that LLMs and their marvelous underpinnings are sentient... we want to believe they can reason like we humans do. This belief sends us down a path of use that bypasses the real benefits: search and language translation capabilities that surpass previous methods.
We would love to say, "Please generate and compile code that does X, push it to this git repository, and let the CI/CD pipeline push from there into production."
Why do we want this? One reason is that we don't naturally want to build out an interactive process where humans engage to QA and iterate, or even to correct. We want perfection, even though the very people creating and providing access to LLMs say that LLMs can generate things that are wrong (implying that use should not assume accuracy or correctness).
I believe that LLMs can, in fact, make a person an 11Xer... 10Xers check their work and are efficient, 11Xers use accelerators but still check their work... ;-)
Claude et al. will get to the point in their evolution where, when given access to a compiler, they will check their work, potentially including the generation of tests that make clear -- at least -- where the LLMs think there might be bugs. BUT does this mean code will be bug-free, or that generated code does what was asked (assuming perfect prompt accuracy and completeness) with 100% fidelity? No. Statistics can be wrong (the wrong dots can be connected). The training corpus is imperfect, thus whatever is derived from it is also imperfect. AND a perfect prompt, whatever that means, might only help... it cannot guarantee.
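The compile-and-test gating described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual tooling: the names `verify`, `solve`, and `spec_tests` are invented for the example. The point of the sketch is that a subtly wrong program can compile cleanly and still fail its tests, which is exactly why compilation alone is not verification.

```python
# Minimal sketch of a "check your work" loop for generated code.
# All names here (verify, solve, spec_tests) are hypothetical
# illustrations, not a real LLM vendor API.

def verify(candidate_code: str, spec_tests: list) -> bool:
    """Compile candidate code, then run spec tests against it.
    Returns True only if it compiles AND every test passes."""
    try:
        compiled = compile(candidate_code, "<generated>", "exec")
    except SyntaxError:
        return False  # did not even compile
    namespace = {}
    exec(compiled, namespace)
    for args, expected in spec_tests:
        try:
            if namespace["solve"](*args) != expected:
                return False  # wrong answer: the wrong dots got connected
        except Exception:
            return False  # runtime error
    return True

# A correct candidate passes; a subtly wrong one is caught.
good = "def solve(a, b):\n    return a + b\n"
bad = "def solve(a, b):\n    return a - b\n"  # compiles fine, still wrong
spec_tests = [((2, 3), 5), ((0, 0), 0)]
print(verify(good, spec_tests))  # True
print(verify(bad, spec_tests))   # False
```

Even this loop only pushes fidelity as far as the tests themselves reach; if the tests are incomplete or themselves generated from an imperfect understanding of the task, passing them guarantees nothing.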
We must think differently about these emerging tools, else we will most assuredly use them incorrectly.
LLMs more aptly represent Paul Bunyan stories than dystopian futures where humans can depend on sentient technological beings that do the bidding of their human masters (or that replace them entirely).