Tech Gaming Hardware discussion (& Hardware Sales) thread

Why do I need so many cores? For gaming future-proofing. I like to spend money once for a long time.
Inadvisable strategy.

This strategy only had some merit back in the days of AMD's FX series: after pricing on those CPUs crashed, you were suddenly comparing a chip like the FX-8350 at ~$130 against i3s that were nearly equal in gaming performance but lacked that long-term upside. That's not the case anymore. In games, the 10900K isn't really doing any better against the 10700K, and the 3950X isn't doing any better against the 3700X, than they already were at launch. Sure, in 4-5 years the 10900K and 3950X will probably hang on longer to sustain minimums before falling off, thanks to those extra cores, but is that what spending up to $2000 is intended to secure?

Consider the pricing of the just-announced Zen 4:
  • $699 = R9-7950X
  • $549 = R9-7900X
  • $399 = R7-7700X
  • $299 = R5-7600X
Now look at Techspot's gaming roundup.
[Image: Techspot gaming average chart]


Why spend an extra $300-$400 on the 7950X for an extra ~1% gaming performance over the 7700X or 7600X? Instead of spending $700 on the 7950X, buy the 7700X for $400, and then in a few years upgrade to the future Ryzen 7 for $400-$450. That future Ryzen 7 will undoubtedly shit on the 7950X in gaming. It's a more efficient distribution of funds that will achieve a far higher average gaming performance over the span of the years for which you hope to be "future-proofed".
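To put rough numbers on that trade-off, here's a back-of-the-envelope sketch in Python. The only grounded figure is the ~1% gaming gap between the 7700X and 7950X today; the future chip's price (~$425) and its assumed ~25% gaming uplift are placeholder guesses purely for illustration, not a prediction.

# Rough sketch of the "buy mid-range now, upgrade later" math.
# Performance numbers are illustrative placeholders, not benchmarks.
YEARS = 6  # assumed ownership window

def average_perf(timeline):
    # timeline: list of (years spent at this level, relative gaming perf)
    return sum(years * perf for years, perf in timeline) / YEARS

# Option A: $699 on a 7950X, kept for the whole window.
cost_a, perf_a = 699, average_perf([(6, 100)])

# Option B: $399 on a 7700X now, then an assumed ~$425 future Ryzen 7
# after 3 years, assumed (not known) to be ~25% faster in games.
cost_b, perf_b = 399 + 425, average_perf([(3, 100), (3, 125)])

print(f"A: ${cost_a}, average relative perf {perf_a:.1f}")  # $699, 100.0
print(f"B: ${cost_b}, average relative perf {perf_b:.1f}")  # $824, 112.5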
 
So AMD and Intel are pretty much in a dead heat with DDR5, and Intel is the only one currently offering both DDR4 and DDR5 support on 12th- and 13th-gen motherboards. Also, according to Intel's updated testing, the 13th-gen i9 takes the single-core performance crown and is in a dead heat in multithreaded workloads.

 


Nvidia opened the door; Intel, don't drop the ball.
 
So AMD and Intel are pretty much in a dead heat with DDR5, and Intel is the only one currently offering both DDR4 and DDR5 support on 12th- and 13th-gen motherboards. Also, according to Intel's updated testing, the 13th-gen i9 takes the single-core performance crown and is in a dead heat in multithreaded workloads.


No 13400 announced.

 
The most useful comparison right now is just the most basic ark & pricing. Here that is:
[Image: 13th-gen ark specs & pricing table]


Things are looking brighter for Intel than I expected. Maybe the most interesting revelation is that while the 13900K will have to contend with the 7950X for bragging rights, it's actually head-to-head with the 7900X as a matter of pricing. On top of that, as had been rumored, the memory support is far superior on Intel's side, so we can expect their CPUs' performance to scale better with faster DDR5 RAM.

I'm still skeptical gaming performance will be as good as they boast, but it's credible. I was a huge fan of them pioneering the big.LITTLE approach in the x86 space last generation, and it appears that could be critical for gaming this gen. The reason is that these CPUs will also scale according to power draw; if you recall, I mentioned that was the most significant aspect of GN's review of Zen 4. This means heat is the limiting factor. And while Zen 4's superior P-core count looks great on paper, in reality it appears Intel's P-cores may be able to sustain a higher average turbo frequency across the top 2-6 cores, the most meaningful for gaming performance, because the hybrid design offers more flexibility in how the CPU handles the task load. This could explain why it's winning the early samples we're seeing on PassMark for single-threaded performance but losing the all-thread score, which was echoed by some early benchmark leaks someone else shared.
 
Weird. Tom's has them beating Intel in most games. They used a 3090 instead of a 3080 for the test. I wouldn't think that would change the results much, though.
Some users on the Tom's Hardware forum were complaining about 12th gen not being updated and using slower RAM (Zen 4 doesn't take a huge performance hit going down to 4800/5200, whereas 12th gen does).

The three reviews (Wccftech, Eurogamer, TechPowerUp) that used the exact same RAM (DDR5-6000) for both Alder Lake and Zen 4 all had the same result: Zen 4 wasn't faster in gaming.
No 13400 announced.

Non-K chips, along with B760 boards, won't be announced until CES in January.
 
So why exactly did AMD release the 5800X3D earlier this year? It's $300 cheaper than the 7950X.
 
So why exactly did AMD release the 5800X3D earlier this year? It's $300 cheaper than the 7950X.
To fight off Alder Lake.

Obviously Zen 4 isn't quite as impressive as would be ideal for their market positioning.
Some users on the Tom's Hardware forum were complaining about 12th gen not being updated and using slower RAM (Zen 4 doesn't take a huge performance hit going down to 4800/5200, whereas 12th gen does).

The three reviews (Wccftech, Eurogamer, TechPowerUp) that used the exact same RAM (DDR5-6000) for both Alder Lake and Zen 4 all had the same result: Zen 4 wasn't faster in gaming.
RAM has never been as important as it appears it will be this generation. The RAM scaling is going to be the divider.
 
We've been waiting a long time for this. Intel has finally unveiled the release date and price of its upcoming Arc A770 graphics card: October 12 and $329.

Since the A770 is reportedly going to face off against the RTX 3060 in performance—Intel says it will be a match or better in at least DX12 and Vulkan games—that price makes a lot of sense. It's actually the exact same MSRP as Nvidia's popular budget GPU, though Intel holds that no RTX 3060s are actually available for that price, instead noting a price around $418 for that card.

https://www.pcgamer.com/intel-arc-a770-price-release-date/
 
The most useful comparison right now is just the most basic ark & pricing. Here that is:
[Image: 13th-gen ark specs & pricing table]


Things are looking brighter for Intel than I expected. Maybe the most interesting revelation is that while the 13900K will have to contend with the 7950X for bragging rights, it's actually head-to-head with the 7900X as a matter of pricing. On top of that, as had been rumored, the memory support is far superior on Intel's side, so we can expect their CPUs' performance to scale better with faster DDR5 RAM.

I'm still skeptical gaming performance will be as good as they boast, but it's credible. I was a huge fan of them pioneering the big.LITTLE approach in the x86 space last generation, and it appears that could be critical for gaming this gen. The reason is that these CPUs will also scale according to power draw; if you recall, I mentioned that was the most significant aspect of GN's review of Zen 4. This means heat is the limiting factor. And while Zen 4's superior P-core count looks great on paper, in reality it appears Intel's P-cores may be able to sustain a higher average turbo frequency across the top 2-6 cores, the most meaningful for gaming performance, because the hybrid design offers more flexibility in how the CPU handles the task load. This could explain why it's winning the early samples we're seeing on PassMark for single-threaded performance but losing the all-thread score, which was echoed by some early benchmark leaks someone else shared.

I wonder how long it will be until we see an all-E-core Intel CPU. If they're hitting 3.9 GHz on the E-cores, they'd be good enough for a Celeron or even a Pentium.
 
We've been waiting a long time for this. Intel has finally unveiled the release date and price of its upcoming Arc A770 graphics card: October 12 and $329.
Since the A770 is reportedly going to face off against the RTX 3060 in performance—Intel says it will be a match or better in at least DX12 and Vulkan games—that price makes a lot of sense. It's actually the exact same MSRP as Nvidia's popular budget GPU, though Intel holds that no RTX 3060s are actually available for that price, instead noting a price around $418 for that card.

https://www.pcgamer.com/intel-arc-a770-price-release-date/
Problem is the 6600 XT (which is better than the 3060) is under $300 now.
 
I wonder how long it will be until we see an all-E-core Intel CPU. If they're hitting 3.9 GHz on the E-cores, they'd be good enough for a Celeron or even a Pentium.

His cat is more chill than his motherboard lol.
 
Problem is the 6600 XT (which is better than the 3060) is under $300 now.
I must confess I'm confused.

As you know, I always refer to the architecture. TechPowerUp has a page for the Intel Arc A770. As you can see, it was updated today, and this includes the price. Originally it was rumored to target the $400-$450 range, and the most recent leaks put it at $389, so I'm wondering if Intel saw gamers' disgust with the NVIDIA launch and decided to get even more aggressive with pricing. But what has me confused are the specs:
https://www.techpowerup.com/gpu-specs/arc-a770.c3914
Everything (we can see) appears to be confirmed:
https://ark.intel.com/content/www/us/en/ark/products/227955/intel-arc-a770-graphics-8gb.html

This is what has me confused. Those specs are way, way beyond the RTX 3060 to which Intel is comparing it. And these specs appear to be set. TechPowerUp keeps a red-flag "This product is not released yet" notice on pages where the specs are still based on leaks, as you can see with the Arc A580, but they've taken that away here, and obviously Intel has its own ark page up with the launch just weeks away. The clocks, the number of Xe cores, the VRAM size & bandwidth...it's all there.

We're not talking about the confusing, complicated on-paper comparisons where GPUs trade blows, so you don't really know how it will shake out until the hardware is actually benchmarked. I mean the A770 utterly shits on the 3060 in every regard. Shits on it. The only superiority for the 3060 is tensor cores, and in gaming, that only matters for DLSS. Perhaps most significantly, the A770's VRAM bandwidth is 42% higher than the 3060's. That bandwidth also happens to be over double that of the RX 6600, and exactly double the RX 6600 XT's.
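For anyone who wants to check those ratios, memory bandwidth is just per-pin data rate times bus width. A quick Python sketch using the memory specs as TechPowerUp lists them (treat the numbers as my transcription, not gospel):

# Bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
cards = {
    "Arc A770 (8GB)": (16.0, 256),  # Gbps per pin, bus width in bits
    "RTX 3060":       (15.0, 192),
    "RX 6600":        (14.0, 128),
    "RX 6600 XT":     (16.0, 128),
}

bandwidth = {name: rate * bus / 8 for name, (rate, bus) in cards.items()}
a770 = bandwidth["Arc A770 (8GB)"]
for name, gbs in bandwidth.items():
    print(f"{name}: {gbs:.0f} GB/s -> A770 is {a770 / gbs:.2f}x")
# Arc A770 (8GB): 512 GB/s -> A770 is 1.00x
# RTX 3060: 360 GB/s -> A770 is 1.42x
# RX 6600: 224 GB/s -> A770 is 2.29x
# RX 6600 XT: 256 GB/s -> A770 is 2.00x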

On paper, its closest analogue is the RTX 3070 Ti on the NVIDIA side, and the RX 6800 on the AMD side (per the latter, Intel also appears to have a 16GB version of the A770 in store).

So...are Intel's drivers really that bad? Are they costing them that much performance even in DX12 and Vulkan? If so, that's insane.
 
I must confess I'm confused.

As you know, I always refer to the architecture. TechPowerUp has a page for the Intel Arc A770. As you can see, it was updated today, and this includes the price. Originally it was rumored to target the $400-$450 range, and the most recent leaks put it at $389, so I'm wondering if Intel saw gamers' disgust with the NVIDIA launch and decided to get even more aggressive with pricing. But what has me confused are the specs:
https://www.techpowerup.com/gpu-specs/arc-a770.c3914
Everything (we can see) appears to be confirmed:
https://ark.intel.com/content/www/us/en/ark/products/227955/intel-arc-a770-graphics-8gb.html

This is what has me confused. Those specs are way, way beyond the RTX 3060 to which Intel is comparing it. And these specs appear to be set. TechPowerUp keeps a red-flag "This product is not released yet" notice on pages where the specs are still based on leaks, as you can see with the Arc A580, but they've taken that away here, and obviously Intel has its own ark page up with the launch just weeks away. The clocks, the number of Xe cores, the VRAM size & bandwidth...it's all there.

We're not talking about the confusing, complicated on-paper comparisons where GPUs trade blows, so you don't really know how it will shake out until the hardware is actually benchmarked. I mean the A770 utterly shits on the 3060 in every regard. Shits on it. The only superiority for the 3060 is tensor cores, and in gaming, that only matters for DLSS. Perhaps most significantly, the A770's VRAM bandwidth is 42% higher than the 3060's. That bandwidth also happens to be over double that of the RX 6600, and exactly double the RX 6600 XT's.

On paper, its closest analogue is the RTX 3070 Ti on the NVIDIA side, and the RX 6800 on the AMD side (per the latter, Intel also appears to have a 16GB version of the A770 in store).

So...are Intel's drivers really that bad? Are they costing them that much performance even in DX12 and Vulkan? If so, that's insane.
Drivers were a huge problem back in the day with their i740 video card.
 
Drivers were a huge problem back in the day with their i740 video card.
There have been rumors Intel has had horrible struggles with them. I suspect you've probably seen GN's video about it. I recall a story from somewhere in the past few months where a high-ranking Intel employee-- either an engineer or an executive-- openly confessed that drivers, not the hardware, are the real challenge in breaking into the GPU game.

But part of me is so hopeful Intel is underselling everything to keep the media hype train at bay just before dropping a John Holmes-sized dick on the table to announce their presence. A WWE-level entrance. Oh, how a boy can hope.
 
Why do the AMDs have so much more cache?
That's just how they designed their cores. Larger cache pools require a larger portion of the total die space. However, AMD's overall cache pool isn't that much larger, and they have a lot less of the faster (L2) cache.

For example, the 7950X has 64MB of L3 cache and 16MB of L2 cache (80MB total). The 7900X has 64MB of L3 and 12MB of L2 (76MB total). Meanwhile, the 13900K has only 36MB of L3, but a whopping 32MB of the faster L2 (68MB total). So it has 15% less total cache than the 7950X and 11% less than the 7900X, but 2.00x and 2.67x their L2, respectively.
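If you want to sanity-check those percentages and multiples, here's the same arithmetic as a quick Python snippet (cache sizes in MB, as quoted above):

# Redo the cache math: totals, deficits, and L2 ratios for the 13900K.
cpus = {
    "7950X":  {"L2": 16, "L3": 64},
    "7900X":  {"L2": 12, "L3": 64},
    "13900K": {"L2": 32, "L3": 36},
}

totals = {name: c["L2"] + c["L3"] for name, c in cpus.items()}
intel = cpus["13900K"]
for rival in ("7950X", "7900X"):
    deficit = 1 - totals["13900K"] / totals[rival]
    l2_ratio = intel["L2"] / cpus[rival]["L2"]
    print(f"vs {rival}: {deficit:.0%} less total cache, {l2_ratio:.2f}x the L2")
# vs 7950X: 15% less total cache, 2.00x the L2
# vs 7900X: 11% less total cache, 2.67x the L2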
 
There have been rumors Intel has had horrible struggles with them. I suspect you've probably seen GN's video about it. I recall a story from somewhere in the past few months where a high-ranking Intel employee-- either an engineer or an executive-- openly confessed that drivers, not the hardware, are the real challenge in breaking into the GPU game.

But part of me is so hopeful Intel is underselling everything to keep the media hype train at bay just before dropping a John Holmes-sized dick on the table to announce their presence. A WWE-level entrance. Oh, how a boy can hope.

I find it a bit unlikely that they're going to come crashing through in the next few months, and I hope the rumours that they aren't sticking it out are untrue. It'd be great to have a third player in the GPU market.

Where I think Intel might have the biggest problems is compatibility with older games - these are probably going to be their lowest priority (as compared to drivers for new games!) and that's where I spend a lot of my time gaming...
 
That's just how they designed their cores. Larger cache pools require a larger portion of the total die space. However, AMD's overall cache pool isn't that much larger, and they have a lot less of the faster (L2) cache.

For example, the 7950X has 64MB of L3 cache and 16MB of L2 cache (80MB total). The 7900X has 64MB of L3 and 12MB of L2 (76MB total). Meanwhile, the 13900K has only 36MB of L3, but a whopping 32MB of the faster L2 (68MB total). So it has 15% less total cache than the 7950X and 11% less than the 7900X, but 2.00x and 2.67x their L2, respectively.

Thanks. I think I'm going to hold out for Raptor Lake, and maybe do a 4080/4070 if the prices correct... though not sure if or when that will happen.
 