Hacker News

When designing an ISA, I assume a basic step is figuring out which instructions to include.

For each instruction, I would guess that a 'draft' compiler is produced that can use that instruction in its code generation steps, and a few 'draft' CPU designs are made which include support for that instruction.

Then cycle-accurate simulations can be run on a set of test benches to see how the addition of that instruction affects performance, power, and code size across a wide array of different use cases.
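The evaluation described above boils down to comparing per-benchmark metrics with and without the candidate instruction. A minimal sketch of that aggregation step, with entirely made-up benchmark numbers (the benchmark names and cycle counts are illustrative, not real simulation data):

```python
from math import prod

# Hypothetical cycle-accurate simulation results: (cycles, code size in bytes)
# for each benchmark, compiled with and without the candidate instruction.
# All numbers are invented for illustration.
results = {
    "dhrystone": {"without": (1_000_000, 4096), "with": (900_000, 3968)},
    "coremark":  {"without": (2_500_000, 8192), "with": (2_450_000, 8192)},
    "aes":       {"without": (800_000, 2048),   "with": (500_000, 1792)},
}

def geomean(xs):
    """Geometric mean, the usual way to summarize benchmark ratios."""
    return prod(xs) ** (1 / len(xs))

# Speedup per benchmark = cycles_without / cycles_with (>1 is a win).
speedups = [r["without"][0] / r["with"][0] for r in results.values()]
# Code-size ratio = size_with / size_without (<1 is a win).
size_ratios = [r["with"][1] / r["without"][1] for r in results.values()]

print(f"geomean speedup:   {geomean(speedups):.3f}x")
print(f"geomean code size: {geomean(size_ratios):.3f}x")
```

With these toy numbers the instruction shows roughly a 1.22x geomean speedup and a small code-size reduction; a real evaluation would also weigh power and the encoding-space cost.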

In the case of RISC-V, where some CPU extensions might be emulated, I would expect the tests to also cover the performance hit of emulating the extension on machines without native support.
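The emulation trade-off can be framed as an expected-cost calculation across the installed base: an emulated instruction (trap plus software handler) is far more expensive than either native support or the plain-ISA instruction sequence it replaces. A sketch with hypothetical cycle counts (all three cost figures are assumptions for illustration):

```python
# Hypothetical per-operation costs. Machines without native support trap
# into a software emulation routine; numbers are illustrative only.
native_cycles   = 1    # with hardware support for the extension
emulated_cycles = 40   # trap + software emulation on machines without it
baseline_cycles = 3    # equivalent sequence of base-ISA instructions

def expected_cycles(p_native: float) -> float:
    """Average cost per op across a population of machines where a
    fraction p_native has native support and the rest must emulate."""
    return p_native * native_cycles + (1 - p_native) * emulated_cycles

# The extension only pays off versus the base-ISA sequence once enough
# of the installed base supports it natively.
for p in (0.5, 0.9, 0.95, 1.0):
    print(f"p_native={p:.2f}: {expected_cycles(p):5.2f} cycles "
          f"(baseline {baseline_cycles})")
```

With these numbers the break-even point is around 95% native adoption, which is one way to see why a compiler might avoid emitting an optional extension by default.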

If all of that was done and it still made sense to add the instruction, then most of these critiques aren't valid, since there would be hard data showing that the approach taken was the best one.

Perhaps when RISC-V was a young project, too many design decisions were made without the massive compute farm needed to run all these simulations, or before more complex multi-issue CPU designs were targeted, and therefore some decisions aren't optimal?



Academia pulled in too much.

Small ISA != Small transistor count.

People will inevitably try to throw more transistors at the ISA's limitations.


This isn't true. Berkeley produced several real RISC-V ASICs and adjusted the encoding according to their findings before RISC-V got any wider attention. The team also had a long history of making real chips before RISC-V, dating back to the early 80s.





