Could someone eli5 risc-v and why the fuss?
Edit: thanks for the replies. Searching further, this 15-minute video is quite well made and told me more than I need to know (for now): https://www.youtube.com/watch?v=Ps0JFsyX2fU
RISC-V (pronounced “risk five”) is a free, open-source Instruction Set Architecture (ISA). Other well-established ISAs like x86, amd64 (Intel and AMD) and ARM are proprietary, so one must pay very expensive licenses to design and build a processor using those architectures. You don’t need to pay a license to build a RISC-V processor; you only need to follow the specifications. That doesn’t mean the CPU designs are also free: they very much remain the closed property of the designer. But RISC-V nonetheless represents a very big step towards more transparency and technology freedom.
I pity the five year old who has to read this.
I’m a grown up though so thank you for the explanation.
Costs less
Yes, I admit it’s still a pretty complex explanation. I gave it my best shot :)
Isn’t it possible to add custom instructions and lock others out of them, leading back to the current ARM situation?
I know there are already a number of extensions in the specifications, such that RISC-V is relevant for designing anything from the simplest microcontroller up to the most powerful supercomputer. I suppose it is possible, and allowed, to design a CPU with proprietary extensions. What should prevent an ARM type of situation is the fact that so many use cases are already covered by the open specifications. What is not there yet, to my knowledge, are things like graphics, video, and neural-net acceleration.
All three are kinda at least half-covered by the vector instructions, which absolutely and utterly kill any BLAS workload dead. 3D workloads use fancy indexing schemes for texture mapping that aren’t included; for video I guess you’d want some special APU sauce for wavelets or whatever (I don’t know the first thing about codecs); neural nets should run fine as they are, provided you have a GPU-like memory architecture, and the vector extension certainly has gather/scatter opcodes. Oh, you’d want reduced precision, but that’s in the pipeline.
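To make the gather/scatter point concrete, this is the scalar access pattern that a vector gather (indexed load) and scatter (indexed store) replace; a plain-C sketch, not tied to any particular ISA.

```c
#include <stddef.h>

/* The access pattern behind "gather": load elements through an index
 * vector instead of walking memory linearly. A vector gather (indexed
 * load) performs many of these loads per instruction; scatter is the
 * mirror image on the store side. Plain scalar C for illustration. */
void gather(float *dst, const float *src, const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[idx[i]];   /* indexed load: the "gather" */
}

void scatter(float *dst, const float *src, const size_t *idx, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[idx[i]] = src[i];   /* indexed store: the "scatter" */
}
```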
Especially with stuff like NNs, though, the microarchitecture is going to matter a lot. Even if, say, a convolution kernel from one manufacturer uses instructions a chip from another manufacturer understands, it’s probably not going to perform at an optimal level.
VPUs, AFAIU, are usually architected like DSPs: a bunch of APUs stitched together with a VLIW instruction encoding, very much not intended to run code that is in any way general-purpose, because the only thing it’ll ever run is hand-written assembly anyway. Can’t find the numbers right now, but IIRC my RK3399 comes with a VPU that out-flops the six ARM cores and the Mali GPU combined, yet it’s also hopeless to use for anything that can’t be streamed linearly from and to memory.
Graphics is by far the most interesting one in my view. That is, it’s a lot of general-purpose stuff (for GPGPU values of “general purpose”) with only a couple of domain-specific bits and pieces.
The instruction set is a tiny part of the overall CPU architecture. You don’t need to lock it down when everything else is proprietary: manufacturing, cores, electrical design, etc. Most chips with RISC-V cores today also contain ARM cores or ARM-licensed IP on the same SoC and are still subject to ARM licensing.
RISC-V is like LEGO, where you can put together pieces to make whatever you want. Nobody can tell you what you can or can’t make, you can be as creative as you want. Oh, and there’s motors and stuff too.
ARM is like Hotwheels, there are lots of cars, but you can’t make your own. You can get a bit creative making tracks, but that’s about it.
AMD and Intel are like RC cars, they’re really fun, but they use a lot of batteries and you can’t really customize them. Oh, and they’re expensive, so you only get one.
Each is cool, but with LEGO, you can do everything the others do, and more. Like LEGO, RISC-V can be slow to work with, especially if you don’t have the pieces you want, but the more people that use it, the better it’ll get and the more pieces you can get. And if you have a 3D printer, you can make your own pieces and share them with others.
Right now it’s more like megablocks
“you” as in a person with the required skills, resources and access to a chip fabrication facility. Many others can just buy something designed and produced by others, or play around a bit on FPGAs.
We will also see how much variation actually happens with RISC-V, because if every processor is a unique piece of engineering, it is really hard to write software that works on every one.
Even with ARM there are arguably too many designs out there, which currently take a lot of effort to integrate.
Sure, and there are more people with that access than just AMD, ARM, NVIDIA, and Intel.
If game devs supported RISC-V, Valve could’ve made the Steam Deck without having to get AMD’s help, which means they would’ve had more options to keep prices down while meeting their performance goals. Likewise for server vendors, phone manufacturers, etc, who currently need to buy from ARM (and fab themselves) or AMD/Intel.
And that’s why I mentioned 3D printing. Making custom 3D models of LEGO pieces is out of reach for many (most?) and even owning a 3D printer is out of reach for many. I have one, but I’ve only built a handful of things because it’s time consuming.
As it gets more software support, we should see a lot more variety in RISC-V chips. We’re not there yet, but we should be excited because it’s starting to get traction, and the future looks bright.
It also means that anyone can make their own instruction set extensions or just some custom modifications, which would make software much more difficult to port. You would have to patch your compiler for every individual chip, if you can even figure out what those instructions are and what they do. Backwards, forwards or sideways (to CPUs from other vendors) compatibility takes effort, and not everyone will try to have that, instead adding their own individual secret sauce to their instruction set.
IMO, I am excited about RISC-V, but if the license doesn’t force adopters to open their designs under an open-source license as well, I do expect even more portability issues than we already have with ARM SoCs.
Compilers basically already do that, and distributed executables usually assume minimal instruction support. Compilers can detect what’s supported, so it’s largely a solved problem, at least if you compile things yourself.
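For the compile-it-yourself case, this is roughly what it looks like in C: the toolchain predefines macros describing the target and the extensions it was built for, and the code can fall back to a baseline path otherwise. A minimal sketch; the macro names follow the RISC-V C API conventions as I recall them, so treat the exact spellings (__riscv_vector in particular) as an assumption and check your compiler.

```c
#include <stdio.h>

/* Sketch of compile-time ISA detection: the compiler defines macros for the
 * target it was given (e.g. -march=rv64gc), and the code picks a path or
 * falls back to the baseline. Macro names per the RISC-V C API spec as I
 * recall them -- verify against your toolchain. */
int main(void)
{
#if defined(__riscv)
    printf("RISC-V target, XLEN = %d\n", (int)__riscv_xlen);
#if defined(__riscv_vector)
    printf("built with the V (vector) extension enabled\n");
#else
    printf("no vector extension assumed; scalar fallback paths only\n");
#endif
#else
    printf("not a RISC-V target; generic code path\n");
#endif
    return 0;
}
```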
I really wish that an affordable desktop chip fab was a thing. Maybe with graphene semiconductors it could be feasible.
It’s affordable today, but only for big orders in the millions (e.g. someone like Valve is big enough).
It would be super cool if small batches (hundreds) were feasible, but I don’t think there’s much demand there since that’s where FPGAs come in.
This is a great answer.
That’s not entirely true. There are companies that have an ARM architecture license, like Apple or Cavium (now bought by Marvell). They are allowed to make their own hotwheels using the spring system or the wheels or whatever.
Not an ELI5 because I’m still not caught up on it, but if my memory serves, RISC-V is an open-source architecture for processors, basically like amd64 or arm64. Actually, I’m pretty sure ARM’s chips are RISC derivatives.
Edit: correcting my comment, ARM makes RISC chips, not RISC-V
ARM and RISC-V are entirely different in that neither one is based on the other, but what they have in common is that they’re both RISC (Reduced Instruction Set Computing) architectures. RISC is what makes ARM CPUs (in your phone, etc) so efficient and hopefully RISC-V will get there too.
x86 by comparison is Complex Instruction Set Computing, which allows for more performance in some cases, but isn’t as efficient.
The original debate from the 80s that defined what RISC and CISC mean has already been settled, and neither of those categories really applies anymore. Today all high-performance CPUs are superscalar, use microcode, reorder instructions, have variable-width instructions, vector instructions, etc. These are exactly the bits of complexity RISC was supposed to avoid in order to achieve higher clock speeds and therefore better performance. The microcode used in modern CPUs is very RISC-like, and the instruction sets of ARM64/RISC-V and their extensions would likely have been called CISC in the 80s. All that to say the whole RISC vs CISC thing doesn’t really apply anymore, and neither does it explain any differences between x86 and ARM. There are differences and they do matter, but by and large it’s not due to RISC vs CISC.
As for an example: if we compare the M1 and the 7840u (similar CPUs on a similar process node, one arm64 the other AMD64), the 7840u beats the M1 in performance per watt and outright performance. See https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_7_7840u-vs-apple_m1. Though the M1 has substantially better battery life than any 7840u laptop, which very clearly has nothing to do with performance per watt but rather design elements adjacent to the CPU.
In conclusion the major benefit of ARM and RISC-V really has very little to do with the ISA itself, but their more open nature allows manufacturers to build products that AMD and Intel can’t or don’t. CISC-V would be just as exciting.
compressed instruction set != variable-width. x86 instructions are anything from one to a gazillion bytes, while RISC-V is four bytes, or optionally (and very commonly supported) two bytes. Much easier to handle.
RISC-V is (as far as I’m aware) the first ISA since Cray to use vector instructions, certainly the only one that actually made a splash. SIMD isn’t the same as vector instructions; most crucially, with vector instructions the ISA doesn’t care about vector length at the opcode level. That would be like writing MMX code back in the day and having that same code automatically use registers as wide as SSE3’s when run on a modern CPU.
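To make the vector-length-agnostic point concrete, here is a SAXPY sketch using the RISC-V vector intrinsics. I am going from memory of the v1.0 intrinsics naming (riscv_vector.h and the __riscv_vsetvl / vle32 / vfmacc / vse32 family), so treat the exact spellings as an assumption; the point is that the loop asks the hardware how many elements it can take per pass instead of hard-coding a register width.

```c
#include <stddef.h>
#include <riscv_vector.h>   /* RVV intrinsics; names per the v1.0 intrinsics
                               spec as I recall them -- check your toolchain */

/* y[i] += a * x[i], strip-mined with vsetvl: the same binary uses the full
 * vector register width of whatever core it runs on (VLEN 128, 512, ...). */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n;) {
        size_t vl = __riscv_vsetvl_e32m1(n - i);           /* elements this pass */
        vfloat32m1_t vx = __riscv_vle32_v_f32m1(x + i, vl);
        vfloat32m1_t vy = __riscv_vle32_v_f32m1(y + i, vl);
        vy = __riscv_vfmacc_vf_f32m1(vy, a, vx, vl);       /* vy += a * vx */
        __riscv_vse32_v_f32m1(y + i, vy, vl);
        i += vl;
    }
}
```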
But you’re right, the old definitions are a bit wonky nowadays. I’d say the main differentiating factors nowadays are having a load/store architecture and disciplined instruction widths. Modern out-of-order CPUs with half a gazillion instructions of a single thread in flight at any time of course don’t really care about the load/store thing, but both things simplify instruction decoding to ludicrous degrees, saving die space and heat. For simpler cores it very much does matter, and “simpler core” here can also mean barely superscalar but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch-prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes, the RISC-V vector extension has opcodes for gather/scatter, in case you’re wondering.)
Then, last but not least: RISC-V absolutely deserves the name it has, because the whole thing started out at Berkeley. RISC I and II were the originals; II is what all the other RISC architectures were inspired by, III was a Smalltalk machine, IV Lisp. Then, for a long time, nothing. Then lecturers noticed that teaching modern microarchitectures with old or ad-hoc instruction sets is not a good idea: x86 is out of the question because it’s full of hysterical raisins, and ARM is actually quite clean, but ARM demands a lot, and I mean a lot, of money for the right to implement their ISA in custom silicon. So they started rolling their own in 2010. Calling it RISC V was a no-brainer.
Oh for sure, but before the days of super-scalars I don’t think the people pushing RISC would have agreed with you. Non-fixed instruction width is prototypically CISC.
If you can simplify the instruction decoding that’s always a benefit - more so the more cores you have.
You’ll get no disagreement from me on that. Maybe you misunderstood what I meant by “CISC-V would be just as exciting”? I meant that if there was a popular, well designed, open source CISC architecture that was looking to be the eventual future of computing instead of RISC-V then that would be just as exciting as RISC-V is now.
Thank you so much for this information.
If you still have commenting motivation, what are the top 5 differences between x86 and ARM?
Up until your post I had thought it was exactly the size of the instruction set, with x86 having lots of very specific multi-step-in-a-single-instruction operations as well as crufty instructions kept for backwards compatibility (like MPSADBW).
I’m more familiar with RISC-V than I am with ARM though it’s my understanding they’re quite similar.
ARM/RISC-V are load-store architectures, meaning they divide instructions between loading/storing and doing computation. x86 on the other hand is a register-memory architecture, having instructions that do both computation as well as loading/storing.
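A rough illustration of the difference; the assembly in the comments is the typical shape of compiler output for each ISA, not verbatim from any particular compiler.

```c
/* One C statement, two ISA styles. The assembly in the comments shows the
 * usual shape of the generated code, not exact output from any compiler. */
void add_in_place(int *p, int v)
{
    *p += v;
    /* x86-64 (register-memory): one instruction may read, add and write:
     *     add  dword ptr [rdi], esi
     *
     * RISC-V / ARM style (load-store): memory is only touched by loads/stores:
     *     lw   t0, 0(a0)
     *     add  t0, t0, a1
     *     sw   t0, 0(a0)
     */
}
```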
ARM/RISC-V also have weaker guarantees as to memory ordering, allowing for less synchronization between cores; however, RISC-V has an extension to enforce the same guarantees as x86, and Apple’s M-series CPUs have a similar mechanism for ARM. If you want to emulate x86 applications on ARM/RISC-V, these kinds of extensions are essential for performance.
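To see where the ordering difference bites, here is the standard message-passing pattern with C11 atomics. On x86 (total store ordering) the release/acquire pair costs almost nothing extra; on weakly ordered ARM/RISC-V the compiler has to emit fences or acquire/release load/store instructions. A generic sketch, nothing vendor-specific.

```c
#include <stdatomic.h>

/* Classic message passing: the writer publishes data, then sets a flag;
 * the reader waits for the flag, then reads the data. The release/acquire
 * pair is what enforces the ordering on weakly ordered CPUs; on x86 (TSO)
 * much of it comes for free. */
static int payload;
static atomic_int ready;

void writer(void)
{
    payload = 42;                                             /* plain store */
    atomic_store_explicit(&ready, 1, memory_order_release);   /* publish     */
}

int reader(void)
{
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                                                     /* wait for flag */
    return payload;                                           /* guaranteed 42 */
}
```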
ARM/RISC-V instructions are variable width but only in a limited sense. They have “compressed instructions” - 2 bytes instead of 4 - to increase instruction density in order to compete with x86’s true variable width instructions. They’re fairly close in instruction density, though compressed instructions are annoying for compilers to handle due to instruction alignment. 4 byte instructions must be aligned to 4 bytes, so if you have 3 instructions A, B and C but only B has a compressed version then you can’t actually use it because there must be 4 bytes between instructions A and C.
ARM/RISC-V also make backwards compatibility entirely optional; Apple’s M-series don’t implement 32-bit mode, for instance, whereas x86-64 still has “real mode” for running 16-bit operating systems.
There are also a number of other differences, like the number of registers, page table formats, operating modes, etc, but those are the more fundamental ones I can think of.
The MPSADBW thing likely comes from the hackaday article on why “x86 needs to die”. The kinda funny thing about that is MPSADBW is actually a really important instruction for (apparently) video encoding; ARM even has a similar instruction called SABD.
x86 does have a large number of instructions (even more so if you want to count the variants of each), but ARM does not have a small number of instructions and a lot of that instruction complexity stops at the decoder. There’s a whole lot more to a CPU than the decoder.
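For context on what MPSADBW actually computes: it is a batch of sum-of-absolute-differences operations, the inner loop of block matching in video encoders. A scalar sketch of that primitive is below; as far as I know, the SSE4.1 intrinsic _mm_mpsadbw_epu8 evaluates eight overlapping 4-byte SADs of this kind per instruction, and ARM NEON's SABD/SABA cover the absolute-difference and accumulate steps.

```c
#include <stdint.h>
#include <stdlib.h>

/* Sum of absolute differences over a 4-byte block: the primitive behind
 * block-matching motion estimation. Scalar reference version only. */
static unsigned sad4(const uint8_t *a, const uint8_t *b)
{
    unsigned sum = 0;
    for (int i = 0; i < 4; i++)
        sum += (unsigned)abs((int)a[i] - (int)b[i]);
    return sum;
}

/* Slide a 4-byte reference block across a row and keep the best match. */
size_t best_match(const uint8_t *row, size_t row_len, const uint8_t *ref)
{
    size_t best = 0;
    unsigned best_sad = ~0u;
    for (size_t off = 0; off + 4 <= row_len; off++) {
        unsigned s = sad4(row + off, ref);
        if (s < best_sad) { best_sad = s; best = off; }
    }
    return best;
}
```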
ARM is load-store and has a relaxed ordering. Whereas x86 has instructions that can read straight from memory, and has Total Store Ordering. ARM also is fixed instruction width, where x86/AMD64 is variable instruction width. Outside of that the difference is mostly licensing.
The CISC vs RISC thing is dead. Also modern ARM ISAs aren’t even RISC anymore even if that’s what they started out as. People have no idea what’s going on with modern technology.
X86 can actually be quite low power (see LPE cores and Intel Atom). The producers of x86 don’t specialize in that though, unlike a lot of RISC-V and ARM producers. It’s not that it’s impossible, just that it isn’t typically done that way.
So is Reduced Instruction Set like in the old assembly days where you couldn’t do multiplication, as there wasn’t a command for it, so you had to do multiple loops of addition?
Right concept, except you’re off in scale. A MULT instruction would exist in both RISC and CISC processors.
The big difference is that CISC tries to provide instructions to perform much more sophisticated subroutines. This video is a fun look at some of the most absurd ones, to give you an idea.
ARM prominently has an instruction to deal with JavaScript. And RISC-V will get those kinds of instructions too; they’re too useful, saving a massive number of instructions and cycles, and the CPU itself doesn’t really need any logic added: the instruction decoder just has to be taught a bit pattern and which micro-ops to emit, the APUs can already do it.
What that instruction will never do in a RISC CPU though is read from memory.
On the flip side, some RISC-V macro-ops are CISC-like, fusing memory access and arithmetic. That’s a microarchitecture detail, though, only affecting code to the degree of “if you want to do this stuff, and want it to run faster on some cores, put those instructions in this exact sequence so the core can spot and fuse them”.
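For a sense of why ARM added that JavaScript instruction (FJCVTZS): JavaScript mandates its own double-to-int32 conversion semantics, which otherwise take a pile of ordinary instructions on every conversion. A rough C sketch of the required behaviour, not the exact ECMAScript spec text:

```c
#include <stdint.h>
#include <math.h>

/* Roughly the double -> int32 conversion JavaScript requires (truncate
 * toward zero, wrap modulo 2^32, NaN/Inf -> 0). ARM's FJCVTZS does this in
 * a single instruction; without it, every JS "| 0" or array-index
 * conversion costs a sequence like the one below. Sketch only. */
int32_t js_to_int32(double d)
{
    if (!isfinite(d))
        return 0;                       /* NaN and +/-Inf map to 0 */
    double t = trunc(d);                /* drop the fractional part */
    double m = fmod(t, 4294967296.0);   /* wrap modulo 2^32 */
    if (m < 0)
        m += 4294967296.0;              /* into [0, 2^32) */
    uint32_t u = (uint32_t)m;
    return (int32_t)u;                  /* reinterpret as signed (assumes
                                           two's-complement wrap, true on
                                           all relevant targets) */
}
```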
Nah, the Complex instructions are ridiculously complex and the Reduced ones can still do a lot of stuff.
RISC-V is modular, so multiplication is optional but probably everything will support it.
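To make the no-multiplier case concrete: on a core without the M extension, a compiler runtime falls back to synthesizing multiplication from shifts and adds, along these lines (a plain-C sketch of the classic shift-and-add routine, not any particular library’s code):

```c
#include <stdint.h>

/* Shift-and-add multiplication: what a multiply costs when there is no
 * MUL instruction (e.g. a RISC-V core built without the M extension).
 * One conditional add and two shifts per bit of the multiplier -- the
 * "multiplication by repeated addition" idea, but O(bits) rather than
 * O(value). Sketch, not any specific runtime's code. */
uint32_t soft_mul32(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1)          /* if the low bit of b is set...        */
            result += a;    /* ...add the (shifted) multiplicand    */
        a <<= 1;            /* next bit position                    */
        b >>= 1;
    }
    return result;          /* product modulo 2^32, like hardware MUL */
}
```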
ARM = Advanced RISC Machine
However, RISC-V is a specific type of RISC, and ARM is not a derivative of RISC-V but of RISC.
Originally Acorn RISC Machine before that
To clarify for those who might not understand that explanation: RISC is just a type of instruction set. x86 is CISC, but ARM and RISC-V are RISC.
Yup. In general:
CISC - complex instruction set - you’ll get really exotic operations, like PMADDWD (multiply numbers, then add 16-bit chunks) or the SSE 4.2 string compare instructions
RISC - reduced instruction set - instead of an instruction for everything, RISC requires users to combine instructions, and specialized extensions are fairly rare
Modern CISC CPUs often (usually? always?) have a RISC design behind the CISC interface; they just translate CISC -> RISC for processing. RISC CPUs tend to have more user-accessible cores, so the user/OS handles sending instructions. CISC can be faster for complex operations since you have fewer round trips to the CPU, whereas RISC can handle more instructions simultaneously due to more cores, so big, diverse workloads may see better throughput. Basically, it’s the old argument of bandwidth vs latency.
Except modern ARM chips are actually CISC too. Also microcode isn’t strictly RISC either. It’s a lot more complex than you are thinking.
There are some RISC characteristics ARM has kept, like the load-store architecture and fixed-width instructions. However, it’s actually more complex in terms of capabilities and instructions than pretty much all earlier CISC systems; early CISC systems did not have vector units and instructions, for example.
Yeah, they’ve gotten a bit bloated, but ARM is still a lot simpler than x86. That’s why ARM is usually higher core count, because they don’t have as many specialized circuits. That’s good for some use cases (servers, low power devices, etc), and generally bad for others (single app uses like gaming and productivity), though Apple is trying to bridge that gap.
But yeah, ARM and x86 are a lot more similar today than they were 10 years ago. There’s still a distinct difference though, but RISC-V is a lot more RISC than ARM.
Arm’s chips are not RISC-V derivatives.
Yup, they’re RISC chips (few instructions), but RISC-V is a separate product line.
It’s not just a separate product line. It’s a different architecture. Not made by the same companies either, so ARM aren’t involved at all. It’s actually a competitor to ARM64.
Exactly. That’s what I meant by “different product line,” like how Honda makes both cars and motorcycles, they may share similar underlying concepts (e.g. combustion engines), but they’re separate things entirely.
And since RISC-V is open source, the discussion about companies is irrelevant. AMD could make RISC-V chips if it wants, and they do make ARM chips. Same company, three different product lines. Intel also makes ARM chips, so the same is true for them.
Since when did AMD make ARM chips? Also, they aren’t as different as a motorcycle and a car; it’s more like compression ignition vs spark ignition. They are largely used in the same applications (or might be in the future), although some specific use cases work better with one or the other. Much like how cars can use either petrol or diesel, but a large ship is better off with compression ignition and a motorcycle with spark ignition.
At least 10 years now, and they’re preparing to make ARM PC chips.
I tried to keep it relatively simple. They have different use cases like cars vs motorcycles, and those use cases tend to lead to different focuses. We can compare in multiple ways:
X86 like motorcycle:
more torque (higher clock speeds, better IPC)
single or dual rider - fewer, faster cores
less complicated (less stuff on the SOC), but more intricate (more pipelining)
ARM like motorcycle:
simpler engine - less pipelining, smaller area, less complex cooling
simpler accessories - the engine is a SOC, but you can attach a sidecar (coprocessor) or trailer, but your options are pretty limited (unlike x86 where a lot of stuff is still outside the CPU, but that’s changing)
The engines (microarch) aren’t that different, but they target different types of customers. You could throw a big motorcycle engine into a car, and maybe put a small car engine into a motorcycle, but it’s not going to work as well. So the form factor (ISA) is the main difference here.
But yeah, diesel vs gasoline is also a decent example, though that kind of begs the question of where RISC-V fits in (in my example, it would be a DIY engine kit that can scale from motorcycles to cars to trucks to ships, if you pick the right pieces).
If you were comparing x86 vs RISC-V you might not be far off. But with ARM vs x86, they have basically the same use cases: desktops, laptops, servers, networking equipment, game consoles, set-top boxes, and so on. x86 even used to be used in mobile phones and even as a microcontroller. It’s not used in those applications as much now, obviously, but it’s very much possible. Originally ARM was developed for the desktop too, and was designed for high performance; look up the Acorn Archimedes. When people say ARM is coming to the desktop they really should be saying ARM is coming back to the desktop, since that’s where it started.
You’re also not correct on the clock speed and IPC front. For a long time Apple’s ARM implementation had better IPC than x86 chips. The whole point of RISC is that you can get better clock speeds and execute more instructions, vs CISC having more complex instructions that execute more slowly. The only really correct part is that x86 chips are more deeply pipelined; this is due to them being CISC, essentially, and needing more stages to hit the same clock speed. Apple’s ARM makes up for this by having more superscalar units than x86 chips, allowing for greater IPC.
Putting graphics and video compression stuff on x86 chips isn’t new either. That’s a question of system design, not of x86 vs ARM. In the server market you get ARM chips that are CPU only. Both also come paired with FPGAs. So it’s not even fair to say ARM has more accelerators on chip. Also any ARM chip with PCIe (such as the server ones) can take advantage of the same co-processors that x86 can, the only limitations being drivers and software.
While I’m with you in general, the “different product lines” analogy really doesn’t work well and it’d probably be best just to abandon it :)