Official Thread XBOX Hardware

I mean, it literally says it's based on incomplete info. It's going to be custom and designed to work with a killer GPU. I'm not too worried.

They do want to target framerate next gen, so I doubt they'll go half-assed with the CPU, but they might if they want to offset the cost of the GPU and dedicated chips.

The rumor holds about as much substance as all the PS5 rumors swirling around.
 
  • Like
Reactions: Frozpot
Pretty sure 12 TF is 12 TF, just like 2 + 2 = 4.

No, Vega 64 is 12 tflops and it's considerably weaker than the 9.75 tflop RX 5700XT.

Would it be 16 tflops if going by RDNA2?

Yeah, it could be ~15 tflops converted to GCN. It would be around 14 converting from RDNA1, which is about 20% more efficient per tflop.

It's confirmed that 12 tflops is raw RDNA 2 performance and not taken from converting the number to GCN tflops.
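
Rough back-of-envelope on those conversions (using the per-tflop efficiency ratios assumed above, roughly 1.25x for RDNA2 and 1.2x for RDNA1 over GCN; these are this thread's assumptions, not official figures):

```latex
% GCN-equivalent throughput under the assumed per-tflop efficiency ratios
12~\mathrm{TF}_{\text{RDNA2}} \times 1.25 \approx 15~\mathrm{TF}_{\text{GCN-equiv}} \\
12~\mathrm{TF}_{\text{RDNA1}} \times 1.20 \approx 14.4~\mathrm{TF}_{\text{GCN-equiv}}
```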

maybe 18 evo am cry

That's way too much.


Like a 1600 is disappointing if true.

Well, he said between a 1600 and a 1600AF. The 1600AF's performance is very close to a 2600's at stock speeds. They are both 12nm Zen+.

Plus, that is the dev kit. The reason it's only a 1600/2600/3600 is likely that they're getting developers used to the idea of having only 6 cores and 12 threads available. 2 cores are probably reserved for the OS. On the PC side, Windows overhead takes away from performance too; it's just that MS reserves the cores on the Xbox instead of letting performance fluctuate.
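
For the PC-side analogy, reserving cores looks roughly like pinning the game process away from a couple of CPUs. A minimal sketch, assuming a Linux-style affinity API (the actual Xbox OS mechanism isn't public, and the CPU split here is made up):

```python
import os

# Illustrative layout: logical CPUs 0-1 reserved for the OS, the rest
# left to the game. The split is hypothetical, not Xbox's real one.
OS_CPUS = {0, 1}
ALL_CPUS = set(range(os.cpu_count()))

# Pin the current (game) process to the non-reserved CPUs so background
# OS work on the reserved pair never competes with game threads.
os.sched_setaffinity(0, ALL_CPUS - OS_CPUS)

print("Game threads restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```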

That's 6 cores (12 threads) at 3.5 GHz versus 6 cores (6 threads) at 1.75 GHz (the Xbox One also reserved 2 of its 8 cores for the OS). That's a huge leap. Those extra threads keep the CPU from getting saturated, making everything more responsive.

I doubt that would hamstring developers all that much, even with next-gen physics on the way. On the PC side today, that would only matter if you're trying to run at framerates above 90 fps.
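
Putting rough numbers on that leap (the SMT and IPC multipliers below are illustrative guesses, not benchmarks):

```python
# Back-of-envelope: Xbox One's 6 available Jaguar cores at 1.75 GHz vs a
# 6C/12T Zen-class part at 3.5 GHz. "smt" and "ipc" are assumed uplift
# factors relative to Jaguar, purely for illustration.
JAGUAR = {"cores": 6, "ghz": 1.75, "smt": 1.00, "ipc": 1.0}  # baseline
ZEN    = {"cores": 6, "ghz": 3.50, "smt": 1.25, "ipc": 2.0}  # assumptions

def relative_throughput(cpu: dict, baseline: dict) -> float:
    ratio = 1.0
    for key in ("cores", "ghz", "smt", "ipc"):
        ratio *= cpu[key] / baseline[key]
    return ratio

print(f"Roughly {relative_throughput(ZEN, JAGUAR):.1f}x the old CPU")  # ~5.0x
```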
 

A teraflop is just a teraflop, which is a trillion floating-point operations per second. All teraflop measurements are the same, assuming the same float width (i.e., 32-bit vs. 16-bit, which was the nonsense people bought into for the Pro). If all a card did was FP ops, that would be the only number that matters (like for Bitcoin mining, if that's still a thing). But the other stuff matters too. It's like horsepower: 500 hp is 500 hp, but it does a lot more in a sedan than it would on a jumbo jet.
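
For anyone wanting the actual math: the headline figure is just shader count x 2 ops per clock (one fused multiply-add) x clock speed. A quick sketch using the public boost clocks of the two cards discussed above:

```python
def peak_tflops(shaders: int, boost_ghz: float) -> float:
    """Peak FP32 teraflops = shaders x 2 ops/clock (FMA) x clock (GHz) / 1000."""
    return shaders * 2 * boost_ghz / 1000

print(f"Vega 64:    {peak_tflops(4096, 1.546):.2f} TF")  # ~12.66 TF
print(f"RX 5700 XT: {peak_tflops(2560, 1.905):.2f} TF")  # ~9.75 TF
```

Same formula, same units, yet the lower number wins in games, which is exactly the "other stuff" point.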

On the CPU, the clock is nice, but it's even more about the other stuff. Twice the threads is huge, and the fact that they're first-class processors instead of heavily modded Speak & Spells is even better.
 

There is a reason Vega was being beaten by comparable Nvidia cards with far fewer SPs, and it's the same reason Navi is faster with far fewer SPs: Vega's GCN cores were badly underutilized. GCN was always a compute monster, but that never translated to games as well.

"All of this leads to one of RDNA's chief improvements. Rather than use a SIMD16 with a four-clock issue, RDNA uses dual SIMD32s with a single-clock issue, meaning that the Compute Unit can be kept better utilised for gaming code. Oversimplifying it a lot, RDNA effectively turns the compute-centric GCN architecture into a more gamer code-friendly one. You can run an entire Wavefront32 - the smallest unit of execution, half the width of Wave64 on GCN - on a SIMD32 in one cycle".

Source: https://hexus.net/tech/news/graphics/131555-the-architecture-behind-amds-rdna-navi-gpus/
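
A toy illustration of the issue-rate difference the quote describes (ignoring everything else a real GPU does):

```python
# GCN issues a 64-wide wavefront over a 16-lane SIMD, taking 4 clocks per
# instruction; RDNA issues a 32-wide wavefront on a 32-lane SIMD in 1 clock.
def clocks_per_instruction(wave_width: int, simd_lanes: int) -> int:
    return wave_width // simd_lanes

print("GCN  (Wave64 on SIMD16):", clocks_per_instruction(64, 16), "clocks")  # 4
print("RDNA (Wave32 on SIMD32):", clocks_per_instruction(32, 32), "clock")   # 1
```

Fewer clocks per issued instruction means less idle hardware when shaders can't fill the machine, which is where the paper tflops get lost on GCN.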
 

The CPU arch is what really makes the difference, not clock speeds or threads. Zen CPUs weren't designed primarily as tablet/netbook/ULP laptop CPUs like Jaguar was; Zen was designed as AMD's high-performance desktop CPU first and foremost.
 

Like a 1600 is disappointing if true.

If true, still magnitudes faster than Jaguar.

They're not going to put a full desktop-class 3700X in a console APU, seeing as the die space required would not fit within their budget. Remember that the CPU die and the I/O die are separate on Zen 2. We'll probably see something closer to laptop Zen 2, performance-wise.
 
From the MS announcement:


Less latency and input lag sound great, but I'm not seeing a ton of that now with my TV, so... They sound like nice-to-haves in my mind, but I'm really hesitant to upgrade my $2K TV for another 5 or so years :).
I am usually one to upgrade my TV every two years. But I am done doing that now. I love my Sony TV that I bought a year and a half ago. I plan on sticking with it as I don't need 8K right now, and those features you mentioned are not an issue for me either. Thanks for posting that.
 
I'm with you. My main TV is still my Samsung KS8500 curved screen from 2016. Are there better TVs? Sure, but it still looks fantastic and is low-latency with around 1500 nits of HDR. There is no way I can justify buying a replacement. I've bought other 4K TVs since (a lower-end, newer Sammy and an LG with horrendous edge-lighting issues), and they don't touch it. It's a great TV.
 
You guys will be good for 4K/60/HDR10.

You won't get HDMI 2.1's 4K/120, 1440p/120, Dolby Vision, ALLM, VRR, or eARC.
That's fine for me for a while, tbh, though I would love to have a Dolby Vision-capable TV. Variable refresh rates would be good too, but I don't know how many TVs support that. It's more a matter of price/performance, and I'm just not seeing enough reason yet. 120 fps may be it, though. I'd have to play. I know that I don't care at all about 8K, that's for certain...
 
I don't think Dolby Vision is tied to HDMI 2.1. A lot of HDMI 2.0 TVs will accept a Dolby Vision signal.

Dolby Vision does not require HDMI 2.0a or 2.1. It embeds the metadata into the video signal. Knowing that previous versions of HDMI would not pass the Dolby Vision dynamic metadata, Dolby developed a way to carry this dynamic metadata across HDMI interfaces as far back as v1.4b. The HDMI specification is now catching up, with v2.0a supporting static metadata and future versions expected to support dynamic metadata as well. Dolby's intent is not to compete with HDMI but merely to enable deployment of a full Dolby Vision HDR ecosystem without having to wait for HDMI standardization to catch up. Dolby was and is directly involved in standardization of the current and future versions of HDMI.


 
I wasn't saying it was...
I pointed out what those two could get, as neither has a DV-capable or HDMI 2.1 display.
Frozpot, I didn't post the 8K part... That's another topic altogether. ;)
 
The interpolation stuff might be legit, and may be the reason Klobrille thought there was validity to the rumor. Microsoft is doing some stuff with AI that should impact old backwards-compatible games. Whether this is one of those things remains to be seen, but I'd expect something pretty significant on the AI/machine learning front. We're already seeing individuals figure out how to do crazy ish on the PC with old games thanks to AI. Google has some crazy AI demos. Microsoft, a megacorp that sells itself as one of the top 3 AI companies in the world, should be expected to apply that to gaming.
 

Well said. I’ve heard some vague rumors about this, but nothing strong enough to post about...yet.
 
Well, I was correct in my assessment of MS's initial XSX announcement.
XSX is both 12 TF AND RDNA2!
Just having decent English comprehension got me there.
Doing the math, XSX is 2 times the XB1X: 6 TF x 2 = 12 TF. Everyone trying to spin it as less than 12 RDNA TF has poor English comprehension skills. There is no way to twist the math.
MS describing XSX's GPU as Next Gen RDNA, and AMD stating in one of their slides that Next Gen RDNA = RDNA2 on the 7nm+ process node, made it obvious XSX was using RDNA2.

What I find interesting are the rumors that MS developed the ray tracing in AMD's RDNA2 architecture. If this is true, it's possible only XSX will be using RDNA2.
Sony may have their own custom ray tracing and VRS solutions for PS5 if that's the case.
 