The reason CUs didn't scale linearly in the past (before RDNA) is that they were inefficient at doing work each clock cycle; I posted an article about it around here somewhere. That changed with RDNA, which is why we're seeing cards like the 5700 and 5700 XT (36 and 40 CUs) perform better than Vega GPUs with 56 and 64 CUs.
Thanks for the info! I believe a lot of people are making assumptions based on how things used to work.
I heard a former dev on a podcast say that on paper, Sony's IO, memory setup, and non-reliance on DX12 make the PS5 look like the better console for developers. However, he also admitted that without diving into how all the new methods work, knowing where the new bottlenecks are, etc., he couldn't say that for sure. It was just his gut reaction after listening to Cerny speak and not having the same level of detail from Microsoft. I think a lot of the misinformation comes from devs hearing Cerny talk about efficiencies and removing bottlenecks without understanding that the Series X does similar things at a minimum.
Edit: Also, does anyone know if DX12U changes the gap between DX and Vulkan? It's interesting that Microsoft is getting beaten so clearly in an area that should be their wheelhouse.