Finally, now I can afford the 5800x3D.
What game can’t be run by a 5800X3D? If anything, I feel like graphics cards are the biggest bottleneck right now.
Dragon’s Dogma 2 is notoriously CPU hungry.
Almost any Paradox game, except maybe Victoria 3.
Simulators and games with mods can push the CPU. But yeah, mostly GPU limited.
The gpu has been the gaming bottleneck for decades.
Yup. I have no trouble running modern games on my Ryzen 5600, which doesn’t even have the massive cache of the 3D chips. I’m not spending >$1k on a GPU, so my CPU is likely more than sufficient for quite a while.
Escape from Tarkov. If you want 120+ fps on Streets you pretty much need a 7800X3D.
I look forward to watching a Gamers Nexus review of this. I hope it’s as good as they say. 😀
I’m an antifan of Apple, but the M4 Max is supposed to be faster than any x86 desktop CPU while using a lot less power. That’s per Geekbench 6. I’d be interested in seeing other measurements.
Geekbench is about as useful a metric as an umbrella on a fish. Also, the M4 Max will not consume less energy than the competition; that’s a misconception arising from the lower SKUs in mobile devices. The laws of physics apply to everyone: at the same reticle size, energy consumption in nT workloads is equivalent. The great advantage of Apple is that they are usually a node ahead, and eschewing legacy compatibility saves space, and thus energy, in the design, which can be leveraged to reduce power consumption at idle or in 1T. Case in point: Intel’s latest mobile CPUs.
The laws of physics apply to everyone
That is obviously true, but it’s a ridiculous argument; there are plenty of examples of systems performing better while using less power than the competition.
For years Intel chips used twice the power of AMD Ryzen for similar performance, and in the Bulldozer days it was the same, just the other way around. Arm has been designing chips for efficiency since a decade before the first smartphones came out, and they’ve kept their eye on the ball the entire time since.
It’s no wonder Arm is way more energy efficient than x86, and Apple made by far the best Arm CPU when the M1 arrived.

The great advantage of Apple is that they are usually a node ahead
Yes, that is an advantage, but the new Intel Arrow Lake has the same advantage over current Ryzen, yet Arrow Lake uses more power for similar performance, despite being designed for efficiency.
It’s notable that Intel was unable to match Arm on power efficiency for an entire decade, even when Intel had the better production node. So it’s not just a matter of physics; it is very much a matter of design too. And Intel has never been able to match Arm on that. Arm still has the superior design for energy efficiency over x86, and AMD has the superior design over Intel.
Intel has had a node disadvantage relative to Zen since the 8700K… From then on the entire point is moot.
From then on the entire point is moot.
No it’s not, because the point is that design matters. When Ryzen originally came out, it was far more energy efficient than Intel’s Skylake. And Intel had the node advantage.
https://www.techpowerup.com/review/intel-core-i7-8700k/16.html
https://www.techpowerup.com/cpu-specs/core-i7-6700k.c1825
Ryzen was not more efficient than Skylake. In fact, the 1500X actually consumed more energy in nT workloads than Skylake while performing worse, which is consistent with what I wrote. What Ryzen was REALLY efficient at was being almost as fast as Skylake for a fraction of the price.
https://www.notebookcheck.net/Apple-M3-Max-16-Core-Processor-Benchmarks-and-Specs.781712.0.html
Will you look at that: in nT workloads the M3 Max is actually less efficient than competitors like the Ryzen 7000 HS parts. The first N3 products had less-than-ideal yields, so Apple went with a less dense node, thus losing the tech advantage for one generation. That can be seen in their laughable nT performance/watt. Design does matter, however, and in 1T workloads Apple’s very wide design excels by performing very well while consuming less energy, which is what I’ve been saying since this thread started.
Power consumption is not efficiency, PPW is.
Tell me you didn’t open the links without telling me you didn’t open the links. Have a nice day friend.
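The perf-per-watt point in this exchange is easy to make concrete. A toy comparison (all numbers made up, purely illustrative) showing why raw power draw alone says nothing about efficiency:

```python
# Toy illustration of perf-per-watt (PPW): the chip that draws more
# power can still be the less efficient one. Numbers are invented.
def ppw(score: float, watts: float) -> float:
    """Performance per watt: benchmark score divided by package power."""
    return score / watts

# Hypothetical nT benchmark results: (score, package watts)
chip_a = ppw(score=1000, watts=50)   # 20.0 points/W
chip_b = ppw(score=1500, watts=100)  # 15.0 points/W

# Chip B draws more power AND posts the higher score,
# yet chip A is the more efficient part.
assert chip_a > chip_b
```

This is why "the M3 Max consumes more than X" and "the M3 Max is less efficient than X" are different claims: only the score-over-watts ratio settles the efficiency question.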
Not to mention ARM chips, which by and large were/are more efficient than x86 on the same node because of their design: ARM chip designers have been doing the efficiency thing since forever, owing to the mobile platform, while desktop designers got into that game quite late. There are also some wibbles like ARM instruction decoding being inherently simpler, but big picture that’s negligible.
Intel just really, really has a talent for not seeing the writing on the wall while AMD made a habit out of it out of sheer necessity to even survive. Bulldozer nearly killed them (and the idea itself wasn’t even bad, it just didn’t work out) while Intel is tanking hit after hit after hit.
Exactly, the Apple chips excel at low-power tasks and will consume basically nothing doing them. They’re also good for small bursty tasks, but for long-lived intensive tasks they behave basically the same as an equivalent x86 chip. People don’t seem to know that these chips can easily consume 80-90W when going full tilt.
The new Intel Arrow Lake is supposed to max out at 150W, but it doesn’t. And that’s still almost 40% better than previous gen Intel!
So hovering around 80-90W max is pretty modest by today’s standards.

Oh of course, the Apple chips are faster, and this is likely a combination of more efficiency thanks to the newer process node and Apple being able to optimize the chips and power draw much better because they make everything. Apple can also afford to use larger chips because they make a profit on the entire computer, not just the processor itself.
That’s impressive, or should I say scary? 150W is a lot of heat to dissipate… I hope those aren’t laptop chips…
The 14900k is an absolute oven
No, but the M4 Max is claimed to be as fast, and Intel improved their chip; it’s down from 250W for the previous gen! And the M4 Max is faster.
We’re condemned to suffer uninformed masses on this. Zen 5 mobile is on N4P at 143 transistors/µm², the M4 Max is on N3E at 213 transistors/µm². That’s a gigantic advantage in power savings and logic per mm² of die. Granted, I don’t think the chiplet design will ever reach ARM levels of power gating, but that’s a price I’m willing to pay to keep legacy compatibility and expandable RAM and storage. That IO die will always be problematic unless they integrate it into the SoC, but I’d prefer if they don’t. (Integration also has power-saving advantages; just look at Intel’s latest mobile foray.)
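A quick back-of-the-envelope check of the density figures quoted here (143 vs 213 transistors/µm², i.e. N4P vs N3E) shows the size of that node gap:

```python
# Density figures from the comment above: transistors per square micron.
n4p = 143  # Zen 5 mobile node (N4P)
n3e = 213  # M4 Max node (N3E)

density_advantage = n3e / n4p
print(f"N3E packs ~{density_advantage:.2f}x the logic per mm^2 of N4P")
# ~1.49x: roughly half again as much logic in the same die area,
# which is where much of the claimed power/area headroom comes from.
```

So even before any design comparison, the N3E part starts with nearly 50% more logic per unit area to spend on width, cache, or power savings.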
I’d consider educating yourself more on this topic.
You made me chuckle.
Thank you for that.
Ah the reddit nostalgia ❤
While the 9000 series looks decent, I honestly think Intel has a really interesting platform to build off of with the Core Ultra chips. It feels like Intel course-correcting from the poor decisions made for the 13th and 14th gen chips. Wendell from Level1Techs made a really good video covering the good things Intel put into the chips while also highlighting some of the bad: things like a built-in NPU and how they’re going to use it to pull in ML-driven profiles for applications and games, or the fact that performance variance between motherboard makers happens more often with the Core Ultra. It’s basically a step forward in tech but a step backward in price/performance.
I work at a tech store; the technicians who build PCs for customers recently tried building with the new Core Ultra 7 265K. Two processors were dead or unstable right out of the box; they tried with known-good RAM, two different CPUs on two different motherboards. It seems that Intel hasn’t really fixed their stability issues, which should be their first concern.
Well I didn’t say they were perfect.
As long as they’ve stopped building the RAM in and losing $16,000,000,000 in a fiscal year.
Is 20% faster than Intel a step up, generation on generation?
The main benefit in the performance increase from Zen 4 to Zen 5 is that reordering the cache and chip layers allowed them to clock the cores higher. One of the biggest bottlenecks for older X3D designs was clocks: the stacked cache insulated a lot of the heat inside the chip, so clocks were stepped back from their non-X3D counterparts.
The 9800X3D’s base and turbo clocks are a generous step up from the previous gen, and likely the biggest contributing factor to the performance increase when reviews drop.
It’ll be a step up from the 7800x3d, but how much is a question. The 9000 series in general has been a disappointment in terms of the gains that were expected, but it does show some kind of gain. There’s reason to think those issues are fixable. Linux performance does show a decent uplift, for one, which has not been the case with Intel’s Arrow Lake chips.
I know people meme about “Zen 5%” (sidenote: genuinely a clever quip), but most of that is down to AMD massively reducing the power draw of the chips.
If you set it to the same power limits as Zen4, you can get large performance improvements.
Gamers have been saying for years that stuff is getting too power-hungry, but when steps were made to reverse this, they collectively lost their minds.
Seriously, what are they expecting, a 25% improvement in performance at half the power draw, while staying on a 5nm-family node?
AMD were dumb for thinking gamers give even the slightest fuck about power usage. Gamers would much more readily accept a CPU going from 120W to 500W if it meant an imaginary +20% perf uplift over a CPU going from 120W to 70W with a +5% perf uplift. I say imaginary because nobody with a high end CPU and a 4090 actually plays their games at 1080p low.
There are gamers and there are gamers.
Some gamers prefer not to have the level of noise of a jet engine taking off right next to them to get a couple percent more frames per second on a game.
I would say there are at least two quite different markets amongst PC gamers who have different preferred balances between performance and the downsides of it (noise, heat, power costs), a bit like not all people who enjoy driving want muscle cars.
I couldn’t quite understand why people were memeing on Zen 5. It’s a 5% performance increase at a much lower TDP; what is there not to like? Efficiency is plenty important. And even if we could see a 20% performance increase while using more power, is that worth it? What are the true benefits of a 20% faster CPU for pure gaming when we are already at the top of the spec sheet? The games where the difference would be a massive number of FPS are ones like CS2, where you would go from 600 to 720 fps; does that truly matter? I like my PCs running as efficiently as possible; that way I know they’ll last longer.
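The 600-to-720 fps example looks even weaker in frame-time terms, which is the number your eyes actually experience. Quick math on those figures:

```python
# Convert the fps figures above into per-frame latency.
def frame_time_ms(fps: float) -> float:
    """Milliseconds spent on each frame at a given frame rate."""
    return 1000.0 / fps

saved = frame_time_ms(600) - frame_time_ms(720)
print(f"{frame_time_ms(600):.3f} ms -> {frame_time_ms(720):.3f} ms "
      f"(saves {saved:.3f} ms per frame)")
# 1.667 ms -> 1.389 ms: under 0.3 ms saved per frame, versus ~8.3 ms
# saved going from 60 to 120 fps. The same 20% gain buys far less
# the higher your frame rate already is.
```

That diminishing-returns curve is why a big percentage uplift at 600+ fps is mostly a bragging-rights number.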
To them it probably is. I’ve seen literal posts (or GitHub comments - I forgot) where they are raging that their fps dropped from 420 to 370 with the latest patch and that the game is now completely unplayable!
They have a point complaining, because the patch caused a big fps drop, but the game is unplayable? At 370 fps? Gtfo xD.
There’s people playing on a lot less than that.
So the smart move here for AMD would have been to bin the chips differently according to their tested stability for power usage, like Intel T SKUs. It’s the same chip, but the “X” versions are running at full power (with bios options to turn it down to be more efficient, or aggressively scale power delivery, or what have you), and “E” versions that just always run at lower voltages and currents.
I agree that cutting TDP nearly in half while STILL pulling out a perf gain is remarkable, but also not something most gamers are going to care much about in the context of a desktop system.