>The main advantage of microcode is that it turns design of control circuitry into a programming task instead of a difficult logic design task.
But then you need circuitry to process the microcode. Why is that desirable?
Is microcode basically an abstraction layer that allows a larger machine-code instruction set to be supported on top of a smaller native instruction set? And if so, why is that better than just using the smaller instruction set directly?
Yes, there is less you have to implement directly in logic. A complicated instruction can be implemented as multiple microcode instructions, even including branches and loops of micro-ops. And RISC basically came about from observing exactly what you describe: cutting out the middleman actually worked better.
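To make that concrete, here's a minimal sketch (a toy, all instruction and micro-op names invented) of how one complicated CISC-style "block copy" instruction can expand into a micro-op loop inside the CPU, with a micro-branch the machine-code programmer never sees:

```python
def run_block_copy(mem, src, dst, count):
    """Execute one architectural 'COPY src, dst, count' instruction
    by issuing a loop of simple micro-ops: load, store, bump, branch."""
    uops = []                       # trace of micro-ops actually issued
    while True:
        # micro-branch: leave the micro-loop when the counter hits zero
        if count == 0:
            uops.append("u_done")
            break
        uops.append("u_load")       # temp <- mem[src]
        temp = mem[src]
        uops.append("u_store")      # mem[dst] <- temp
        mem[dst] = temp
        uops.append("u_inc_ptrs")   # advance both pointers
        src += 1
        dst += 1
        uops.append("u_dec_count")  # count <- count - 1
        count -= 1
    return uops

mem = [1, 2, 3, 0, 0, 0]
trace = run_block_copy(mem, src=0, dst=3, count=3)
print(mem)          # [1, 2, 3, 1, 2, 3] -- one instruction copied the block
print(len(trace))   # 13 -- one short opcode, many micro-ops
```

One architectural instruction, a whole loop of hardware-level work: that's the expressiveness microcode buys, and also the "middleman" RISC cut out.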
But CISC complexity and microcode were significantly driven by the lack of good optimizing compilers. A lot of performance-critical code had to be written in assembly, and writing assembly was difficult and time-consuming, so it was nice to have more expressive instructions.
Early CPUs also had little or no cache and instruction fetch bandwidth was a very significant bottleneck. This was another significant driver for CISC.
So it wasn't that CPU designers were idiots from the start; they had good reasons for the choices they made at the time. RISC required a certain confluence of hardware and software advances before it became the obviously better alternative.
Interestingly, things swung back the other way a decade or so later: as CPUs got vastly more complicated and capable, the ISA became relatively less important, and CISCs were able to mostly catch back up to RISCs.
Good answer. I'll also point out that RAM was expensive in the olden days, so you wanted your instructions to be as dense as possible. It made sense to have one instruction do as much as possible. It also made sense to have instruction lengths ranging from 1 byte to 4 bytes or more, like the 8086 did, even though that made decoding complicated.
Also, to answer the original question: you really don't want to program in microcode directly, as it's kind of a mess. Micro-instructions expose a lot of ugly hardware details. Moreover, it ties your programs to one fixed hardware design: a new chip means new microcode, so you can't upgrade without rewriting everything.