Official Thread Pillow Fight that nobody wins with MOAR Jackie Chan and guys comfortable with STRETCHING their sexuality!

In this day and age, this console warz bulls*** is played out. People in here are acting like children, with Val leading the charge.

A great time for all gamers coming soon.
 
In this day and age, this console warz bulls*** is played out. People in here are acting like children, with Val leading the charge.

A great time for all gamers coming soon.

Maybe it's because our board isn't that big, but I don't see anywhere near as much of it here as I do on NeoGAF and ResetEra.
 
Yup. So they’re both different. How odd.



AMD’s RDNA2 Next-Generation Architecture For Radeon Is Not Identical To Xbox Series X Or PS5
Usman Pirzada, 9 hours ago

Architectural reveals of the next-generation consoles have had enthusiasts discussing AMD's upcoming RDNA2 architecture with much gusto and have started the inevitable speculation about how RDNA2 will translate into next-generation Radeon graphics cards. Unfortunately, or fortunately, a source intimately familiar with the matter tells me that AMD's RDNA2 implementation for its upcoming graphics cards is not comparable to the next-generation consoles' at all.

RDNA2 for AMD Radeon graphics cards not the same as next-gen consoles 'custom RDNA2'
There is a very good reason why both the PS5 and Xbox Series X have listed the architecture used as "custom RDNA2" instead of just "RDNA2". This is a point that AMD wanted to drive home and specifically insisted on. They do not want gamers confusing the RDNA2 used in the XSX and PS5 with that of the upcoming Radeon cards, hence the "custom" prefix. To be clear, my source did not say how it would be different, but I did feel the implication was that AMD's RDNA2 implementation for Radeon GPUs will be superior.

Here is what I do know:

  • AMD's RDNA 2 Next-Generation GPU architecture for Radeon is not the same as the XSX's or the Sony PS5's. The semi-custom implementation is a different architecture from the one soon to be featured in the upcoming Radeon GPUs.
What I do not know right now:

  • Difference between the implementation of RDNA 2 in the XSX and PS5.
  • Difference between the RDNA2 implementation in next-gen consoles and in upcoming Radeon cards. (While my source did not explicitly state as such, the implication in the conversation was that the Radeon-based RDNA2 will be superior.)
AMD's upcoming Big Navi GPUs were originally expected to land by the end of Q2, but it remains to be seen whether the ongoing Coronavirus crisis will have any impact. The company has already revealed, however, that the upcoming RDNA2-based GPUs will feature support for the DirectX 12 Ultimate API - which should give quite an edge to these graphics cards. DX12 Ultimate, like other DirectX APIs before it, requires a certain feature level to be achieved (the last feature level to be released was 12_1, so we are likely looking at hardware that corresponds to 12_2 or above).
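
As an aside, here is a minimal sketch of how a PC application would query that feature level through the standard D3D12 API (assuming a Windows SDK recent enough to define D3D_FEATURE_LEVEL_12_2; nothing here is specific to Big Navi or to the consoles):

[CODE=cpp]
// Minimal sketch: ask D3D12 for the highest supported feature level.
// DX12 Ultimate hardware is expected to report D3D_FEATURE_LEVEL_12_2.
// Link against d3d12.lib; requires a recent Windows SDK for the 12_2 enum.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;  // no D3D12-capable adapter found

    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_12_2, D3D_FEATURE_LEVEL_12_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_11_0
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS query = {};
    query.NumFeatureLevels = sizeof(levels) / sizeof(levels[0]);
    query.pFeatureLevelsRequested = levels;

    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_FEATURE_LEVELS, &query, sizeof(query)))) {
        printf("Max supported feature level: 0x%x\n",
               static_cast<unsigned>(query.MaxSupportedFeatureLevel));
    }
    return 0;
}
[/CODE]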

Rumored so far: AMD's Big Navi GPUs spotted in EEC filings

What are thought to be AMD's upcoming Big Navi GPUs have previously been spotted in EEC filings. Keep in mind however that AMD can change the nomenclature and the only thing these filings tell us are the number of variants.

The EEC filing mentions four new GPUs: Radeon RX 5950XT (the flagship), Radeon RX 5950, Radeon RX 5900 and the Radeon RX 5800. This means we can expect at the very least one more Navi GPU with three variants. It is entirely possible that this is the Big Navi that was promised to us almost 2.5 years ago and the one that will deliver to AMD fans the high-end card they have been waiting for. On the other hand, it is also possible that this is a new die that the company prepared, and while these graphics cards will obviously be more powerful than the RX 5800 series, it is still not Big Navi.

Without any further ado, here is a screenshot of the EEC filing submitted by AFOX corporation - an add-in board partner for AMD Radeon.



Nomenclature would dictate that the RX 5950 XT be significantly more powerful than any of its younger siblings (like the already-released RX 5700), and going by the steps involved, we can easily see that this is going to be quite a powerful card.

RDNA 2 is RDNA 2. Are there different flavors, and will it be better on higher-end GPUs that are more powerful because it has more time to mature? Sure, but these consoles have RDNA 2 according to AMD.
 
Ask PS3 how that worked for it.

It's not the same.
1st, PS3 used two DIFFERENT types of RAM, XDR AND GDDR3. Not the same situation, no matter how many times you tell yourself, because the XSX is using the same type of RAM but with a set of faster and slower modules in a single pool.
The CPU has access to the slower 6 GB while the GPU has access to the faster 10 GB on a bus that is WIDER than the PS5's.
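
Just to put the raw numbers being argued about in one place, here's a quick back-of-the-envelope sketch (assuming the publicly quoted 14 Gbps GDDR6 on both machines, a 320-bit bus on the Series X and 256-bit on the PS5; the 192-bit width for the slow 6 GB pool is the commonly cited figure, not something confirmed in this thread):

[CODE=cpp]
// Back-of-the-envelope peak memory bandwidth:
// peak GB/s = (bus width in bits / 8 bits per byte) * per-pin data rate in Gbps.
#include <cstdio>

double peak_gbps(int bus_width_bits, double data_rate_gbps) {
    return (bus_width_bits / 8.0) * data_rate_gbps;
}

int main() {
    printf("XSX fast 10 GB pool (320-bit): %.0f GB/s\n", peak_gbps(320, 14.0)); // 560 GB/s
    printf("XSX slow 6 GB pool  (192-bit): %.0f GB/s\n", peak_gbps(192, 14.0)); // 336 GB/s
    printf("PS5 unified 16 GB   (256-bit): %.0f GB/s\n", peak_gbps(256, 14.0)); // 448 GB/s
    return 0;
}
[/CODE]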
 
It reminds me of the FP16 debate and how it would enable the PS4 Pro to double its TFLOPs.

I know the Series X has something similar, but it's not going to double its power.

But it is an interesting setup and I’m curious to see the benefits.
I think the breakdown to FP16 and even down to 4-bit will matter more this gen because there are so many features dedicated to rendering the screen and its components at different levels of detail. All the VRS, RT, and geometry engine stuff can process things at lower precision and save headroom.

It seems to me that half of this new gen is about getting the most out of every process. It's probably why they can get so much more out of each "flop". Last gen didn't have these techniques, so the lower-precision stuff had less impact.
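
To put rough numbers on the old "FP16 doubles your TFLOPs" claim, here's a small sketch using the usual 2-ops-per-FMA convention and the PS4 Pro's quoted 2304 ALUs at 911 MHz (figures assumed for illustration; real games only get the doubling on work that actually fits in half precision):

[CODE=cpp]
// Theoretical shader throughput: 2 ops per FMA * ALU count * clock (GHz).
// Rapid packed math lets each FP32 lane issue two FP16 ops, doubling the peak.
#include <cstdio>

double tflops(int alus, double clock_ghz, int ops_per_lane) {
    return 2.0 * alus * clock_ghz * ops_per_lane / 1000.0;
}

int main() {
    // PS4 Pro: 2304 ALUs @ 0.911 GHz
    printf("PS4 Pro FP32 peak: %.1f TFLOPs\n", tflops(2304, 0.911, 1)); // ~4.2
    printf("PS4 Pro FP16 peak: %.1f TFLOPs\n", tflops(2304, 0.911, 2)); // ~8.4
    return 0;
}
[/CODE]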
 
What's really bugging me about these next-gen console talks is that people are becoming celebrities and gaining cult status on the internet...not by talking about next-gen possibilities but rather by downplaying and being negative about their non-primary option...with nothing more than assumptions made from incomplete information. This is neither Microsoft's nor Sony's first rodeo in the console business. Unlike in the past, when they had alternative strategies, both are catering hardcore towards developers. Assuming that either had some significant oversight that crippled the balance, without even a single developer anecdote (not assumption but actual hands-on anecdote), is beyond premature.

There are always limitations, always a weakest component which creates a new bottleneck, and always trade-offs based on cost. We should hold off on assuming that one of the cheaper components of the machine is somehow going to cripple the most expensive and innovative stuff. That just doesn't make sense. Not that it's not possible...it's just that I'm not going to trust a YouTube fake tech insider over the best engineers in the world before there's a shred of real-life evidence.
 
I think the breakdown to FP16 and even down to 4-bit will matter more this gen because there are so many features dedicated to rendering the screen and its components at different levels of detail. All the VRS, RT, and geometry engine stuff can process things at lower precision and save headroom.

It seems to me that half of this new gen is about getting the most out of every process. It's probably why they can get so much more out of each "flop". Last gen didn't have these techniques, so the lower-precision stuff had less impact.

Agree. A lot of calculations don't need to be nearly as precise, and new features allow them to separate those tasks better, which will now allow for more efficient use of lower-precision calculations.
 
Twitter Explanation

This guy tries to explain the memory thing. It doesn't matter whose memory is "better" if it's not a bottleneck in either setup. At this point, due to new solutions with SSDs and efficiencies, there's no evidence either will be bottlenecked. This isn't your PS3 memory. Not close. Two different speeds run in parallel from the same pool. CPU tasks and the OS run off the lower speed.
 
It's not the same.
1st, PS3 used two DIFFERENT types of RAM, XDR AND GDDR3. Not the same situation, no matter how many times you tell yourself, because the XSX is using the same type of RAM but with a set of faster and slower modules in a single pool.
The CPU has access to the slower 6 GB while the GPU has access to the faster 10 GB on a bus that is WIDER than the PS5's.
A 16GB @ 448GB/s setup is superior to a 10GB @ 560GB/s setup.
 
A 16GB @ 448GB/s setup is superior to a 10GB @ 560GB/s setup.

I mean, when I'm presented with facts I usually try to have a counterpoint to what the original statement was...lol.
You said the PS3 RAM situation is the same. I provided proof that it wasn't.

It will be okay Val. You can't be right all the time. The world will go on.
🙂
 
I mean, when I'm presented with facts I usually try to have a counterpoint to what the original statement was...lol.
You said the PS3 RAM situation is the same. I provided proof that it wasn't.

It will be okay Val. You can't be right all the time. The world will go on.
🙂
My original statement said it was exactly the same.
I have spun away from that and now say it's not exactly the same ;)
Both setups run the RAM at different speeds, which is a bad thing.
 
Val is obviously practicing to be a politician. Whenever he gets cornered on facts, he keeps repeating his statement with the same incomplete, inaccurate support as a counter-argument. Eventually everyone gets tired and moves on to the next topic. Val wins the internet! Since it's not actually impacting my life in any real way, I think it's okay to let Val have this.

Val, thanks for giving me someone I can practice arguing with when I need someone to take the impossible viewpoint! I'm older, tire faster, and am moving on to the next topic. Feel free to celebrate!
 
Now that most of us understand that memory bandwidth is not a bottleneck of the Xbox Series X, let's talk about the thing that everyone suddenly forgot about: the CPU. It's very similar in both machines and will be similar in the Lockhart. PC gamers have had their CPUs under-used by devs for years. Now 3rd-party devs can develop towards it. Massive CPU jumps combined with offloading of CPU-heavy tasks in both machines mean an ish-ton of CPU compute for things like physics, AI, environmental interaction, better collision, and therefore more realistic animations.

When we look back on this gen, it will be defined by the CPU and greatly improved I/O speeds. Going to result in gameplay enhancements across the board. Not just about visuals this time.
 
In all seriousness, my original post still stands...


When PS5 has better-looking/more detailed games and XSX has more higher-resolution games, we will see that I was correct.
 