The Xbox One Hardware Information Thread

Except there is.

MrX was pulling stuff out of his butt. It was obvious to anyone. Tiled Resources is a real thing; the question is what it means. People with some knowledge of it are trying to explain what they think, and there is a legitimate debate. You, with absolutely zero knowledge of it beyond the bug up your ass about it, and who can't seem to allow any positivity surrounding the Xbox, have nothing to offer that debate.

If anyone is making claims that TR means the XB1 will run circles around PS4, they're wrong. But the technology itself is not secret sauce, and nothing you can say will make it such.
 
If someone told me the same thing was in a PS4, I would still call it special sauce. It's strung together from conspiracy theories based on dev quotes, misunderstandings of MS tech presentations, attacks on AMD engineers and patch notes, hand-waving away any requests for evidence, and claims of "PS4 being at a significant disadvantage".

There's almost no difference between these claims and something dredged up from misterx's blog. Same amount of evidence, same plausibility, same rationale and logic. The only difference is that some of you are bizarrely taking it seriously, just as a reaction to someone you dislike using logic, reason, skepticism, and evidence to call a spade a spade.

And once again, tiled texture techniques give devs more effective memory size and bandwidth, plus some side benefits like cheaper anisotropic filtering and better LoD scaling. But they don't make the GPU any better at actually rendering things. You can bring a GPU to its knees with a 4K texture just by rendering 5 billion polys with that texture applied to it. It's not some magical thing that will double a system's effective TF count.
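
To put a rough number on the "more effective memory" part: with tiled (sparse) textures, only the tiles actually being sampled need physical backing, while the rest of the huge virtual texture stays unmapped. Here is a minimal back-of-envelope sketch; the texture size, tile size, and resident-tile count are illustrative assumptions, not measurements from either console.

```cpp
// Back-of-envelope: memory for a fully resident texture vs. a sparsely
// resident (tiled) one. All figures below are illustrative assumptions.
#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t texDim        = 16384;       // 16K x 16K virtual texture
    const uint64_t bytesPerTexel = 4;           // uncompressed RGBA8 for simplicity
    const uint64_t tileBytes     = 64 * 1024;   // 64 KiB tiles (typical tiled-resource granularity)

    // Fully resident: sum of dim^2 * bytes-per-texel over the whole mip chain.
    uint64_t fullBytes = 0;
    for (uint64_t d = texDim; d >= 1; d /= 2)
        fullBytes += d * d * bytesPerTexel;

    // Sparsely resident: assume only ~1500 tiles (a made-up number) are
    // actually sampled this frame and therefore mapped to physical memory.
    const uint64_t residentTiles = 1500;
    const uint64_t sparseBytes   = residentTiles * tileBytes;

    printf("Fully resident:    %llu MiB\n", (unsigned long long)(fullBytes / (1024 * 1024)));
    printf("Sparsely resident: %llu MiB\n", (unsigned long long)(sparseBytes / (1024 * 1024)));
    return 0;
}
```

Nothing in that arithmetic makes shading or geometry any faster, which is the point being made above.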

If there is a difference in how the two consoles handle tiled texturing tasks, it's likely to be small imo.

At this point I feel a bit like you are suggesting that there is no possible improvement to bubble sort except throwing more hardware and compute units at it. Because it is becoming apparent that not only do you not understand the problem set (which is OK, it is really hard stuff), but you are not interested in learning about it either (which suggests a lack of curiosity and suboptimal intelligence for the problem set at hand).
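
To make the bubble-sort analogy concrete, here is a minimal illustration that the same hardware gets very different results from a better algorithm; the array size is arbitrary and the timings will vary by machine.

```cpp
// The bubble-sort analogy: same CPU, dramatically different results,
// purely from a better algorithm. Array size is arbitrary.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

static void bubbleSort(std::vector<int>& v) {
    for (size_t i = 0; i + 1 < v.size(); ++i)
        for (size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1]) std::swap(v[j], v[j + 1]);
}

int main() {
    std::mt19937 rng(42);
    std::vector<int> a(20000);
    for (int& x : a) x = static_cast<int>(rng());
    std::vector<int> b = a;

    auto t0 = std::chrono::steady_clock::now();
    bubbleSort(a);                   // O(n^2)
    auto t1 = std::chrono::steady_clock::now();
    std::sort(b.begin(), b.end());   // O(n log n), same hardware
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    printf("bubble sort: %lld ms\n", (long long)ms(t1 - t0));
    printf("std::sort:   %lld ms\n", (long long)ms(t2 - t1));
    return 0;
}
```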

Just because MisterX talked about Tiled Resources does not disqualify it. There are plenty of academic papers on tiled rendering and virtual textures. There are many industry papers as well about what hardware optimized for tiled rendering would look like.

I don't think you have even separated Tiled Rendering and virtual textures as different concepts in your head yet.
 

We take reliable information seriously, like what came from MS's dev docs, their Hot Chips presentation, their patents, and their DF interview. If you take issue with the reliability of those sources, you need to actually make a case for what your contention is. We take them at face value because the claims made mesh thoroughly with what we already know about the subjects being discussed, and when we get surprising new info it is always explained well (like how the CPU is the most limiting factor in determining framerates in modern games; devs at B3D noted this beforehand too, so it's not just magical PR).

What you are seemingly trying to address is the tiled rendering patent I've noted. I've no clue what misterx is claiming these days. Never read his blog or whatever it is and don't care to. Read the patent. The gist is that with a low-latency memory pool like the eSRAM, and display planes that partition the image plane into quadrants, you can exploit the method for processing tiled assets in a unique way that dramatically improves rendering efficiency and saves tons of bandwidth by avoiding copying operations that you otherwise need for tiled rendering. It improves rendering efficiency by tiling depth first instead of breadth first, meaning they can leverage the low-latency eSRAM to sample multiple planes deep for a single tile and then proceed to process the other tiles one at a time.

With GDDR5's high latency you can't do this without wasting TONS of GPU cycles where your GPU is sitting idle doing nothing. With low-latency memory holding your tiled assets you can keep sampling rapidly to keep the GPU chewing through these tile sets almost continuously. The typical approach to a GPU handling tiled assets (like sparsely partitioned textures, for instance) requires your GPU to process things across a single plane, then the next plane, then the other planes until you are done with them, and then you composite all of those together. When you have high-latency memory storing your tiled assets you have to hide that latency by bringing in huge chunks of data to work on as a group, hence your GPU works on the planes one at a time (lots of tiles) instead of sampling tile by tile, which stalls your GPU.

The result is that with low-latency memory and some buffer to store your output in localized regions of the screen (i.e. display planes that are broken into quadrants), you can do depth-first tiled rendering effectively, and that's a LOT more efficient time-wise than breadth-first tiled rendering, which is the best available option for a machine using higher-latency memory like the PS4. So yes, it really would significantly improve the real-world effective output on a "flops" basis, in the sense that it becomes a tortoise-and-the-hare scenario: on PS4 your tiled assets are rendered quickly but with tons of stalls between each major set of ops, while on X1 it renders slightly slower (not much, though) but does so without long stalls, so within a frame you will be able to get more done with the "slower" GPU.
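
If it helps, the breadth-first vs. depth-first distinction being described is basically a loop-ordering question. The sketch below only illustrates that ordering; "plane" and "tile" are stand-ins, and none of this is the actual patent's or either console's code.

```cpp
// Loop-order sketch of breadth-first vs. depth-first tile processing.
// processTile() stands in for whatever per-tile sampling/composition work
// happens for one tile of one plane; everything here is illustrative.
#include <cstdio>

const int kPlanes = 4;  // e.g. image-plane quadrants / layers
const int kTiles  = 6;  // tiles per plane (tiny on purpose)

void processTile(int plane, int tile) {
    printf("plane %d, tile %d\n", plane, tile);
}

// Breadth-first: finish an entire plane before touching the next one.
// Attractive when you must fetch big contiguous chunks to hide memory latency.
void breadthFirst() {
    for (int p = 0; p < kPlanes; ++p)
        for (int t = 0; t < kTiles; ++t)
            processTile(p, t);
}

// Depth-first: for one tile position, walk through every plane, then move on.
// Only attractive if per-tile fetches are cheap (a low-latency local pool);
// otherwise every inner step risks a stall.
void depthFirst() {
    for (int t = 0; t < kTiles; ++t)
        for (int p = 0; p < kPlanes; ++p)
            processTile(p, t);
}

int main() {
    puts("-- breadth-first --");
    breadthFirst();
    puts("-- depth-first --");
    depthFirst();
    return 0;
}
```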



What you need to understand is that you are literally asserting that MS's engineers, their dev docs, their patents, and what actual devs at B3D say are all bogus and are to be conflated with misterx's conspiracy theories. You want us to dismiss ALL of the reliable info in those sources. Why? Because you discovered that you can easily claim they represent 'secret sauce', and that allows you Sony fanboys to dismiss any and all discussion totally out of hand on issues you fear make X1 look more capable than you and the ignorant hivemind had assumed.
 
 

And.........

Breathe.......

(derp)....

:)
 
Just picked up my new One controller and I'll say this: I absolutely love it. I was a huge DS2 fan until the 360 controller, and all MS did was tweak and not mess with success. Everything from the D-pad to the triggers feels, IMO, perfect. All things considered, MS should be proud of the One controller.
 
What Mr. X does is take real things and then make science fiction out of them. Like he takes the idea of 4 display planes and then multiplies the X1's TFLOPS by 4 to get 5.2 TFLOPS. It's goofy crap like that. He's now talking about taking X-rays of the X1 GPU to find the hidden power/hardware.

The stuff that XboxNeo and Astrograd are talking about consists of real things that MS and reputable tech websites have talked about. No doubt MisterX has read and watched all the same stuff... and is now taking everything way out of context and dreaming up impossible conclusions.

That does not de-legitimize the real impact these features could have this gen. Every time someone mentions some new MS tech/API feature, dedicated Gaffers use words like "secret sauce" and "Mister X" to de-legitimize them. That's why we can tell where Consolewarz gets his information.
 
Just need to ignore them, especially if they come from "there"!
 
I think MS is embedding secret hardware in the plastic case. Even the fan blades are covered in circuitry. Only an X-ray machine can get to the bottom of this. Keep digging!

I worry for mrx-'s sanity. I wonder what mrsx thinks.
 
Didn't they just X-ray the PS4's SoC to see it? Why wouldn't they do the same to the X1's?

They are going to do it, and I will have my popcorn ready for the misterxmedia blog when there are no 16 CUs, dpu, flux capacitor, PowerPC CPU, dual-7970-equivalent GPU, etc., etc.

The backpedaling from the "insiders" and the anger from the retards who actually believed it should be pretty entertaining.
 
Sorry to burst your bubble, but the insiders there are already preparing people not to trust what comes from Chipworks' X-ray analysis, saying it will be deceptively wrong.

;)
 

I still expect fireworks. The kids there were already turning on one another when the stuff that was supposed to be announced in October didn't happen.
 
Andrew Goossen: I just wanted to jump in from a software perspective. This controversy is rather surprising to me, especially when you view ESRAM as the evolution of eDRAM from the Xbox 360. No-one questions on the Xbox 360 whether we can get the eDRAM bandwidth concurrent with the bandwidth coming out of system memory. In fact, the system design required it. We had to pull over all of our vertex buffers and all of our textures out of system memory concurrent with going on with render targets, colour, depth, stencil buffers that were in eDRAM.

Of course with Xbox One we're going with a design where ESRAM has the same natural extension that we had with eDRAM on Xbox 360, to have both going concurrently. It's a nice evolution of the Xbox 360 in that we could clean up a lot of the limitations that we had with the eDRAM. The Xbox 360 was the easiest console platform to develop for, it wasn't that hard for our developers to adapt to eDRAM, but there were a number of places where we said, "Gosh, it would sure be nice if an entire render target didn't have to live in eDRAM," and so we fixed that on Xbox One where we have the ability to overflow from ESRAM into DDR3 so the ESRAM is fully integrated into our page tables and so you can kind of mix and match the ESRAM and the DDR memory as you go.

Every time I read that article, I get excited.
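
The "overflow from ESRAM into DDR3" and "fully integrated into our page tables" parts of that quote can be pictured as a page table whose entries point into either pool. Here is a toy sketch of that idea, with made-up pool and page sizes and a deliberately naive fill-the-fast-pool-first policy; it is not the actual Xbox One memory manager.

```cpp
// Toy model of a resource whose pages can live in either a small fast pool
// (stand-in for ESRAM) or a large slow pool (stand-in for DDR3).
// Pool size, page size and allocation policy are illustrative assumptions.
#include <cstdint>
#include <cstdio>
#include <vector>

enum class Pool { Fast, Slow };

struct PageEntry {
    Pool     pool;      // which physical pool backs this virtual page
    uint32_t pageIndex; // page index within that pool
};

int main() {
    const uint64_t pageSize     = 64 * 1024;            // 64 KiB pages
    const uint64_t fastPoolSize = 32ull * 1024 * 1024;  // 32 MiB "ESRAM"
    const uint64_t resourceSize = 1920ull * 1080 * 32;  // e.g. several fat render targets

    const uint64_t totalPages = (resourceSize + pageSize - 1) / pageSize;
    const uint64_t fastPages  = fastPoolSize / pageSize;

    // Naive policy: fill the fast pool first, overflow the rest to the slow pool.
    std::vector<PageEntry> pageTable;
    for (uint64_t p = 0; p < totalPages; ++p) {
        if (p < fastPages) pageTable.push_back({Pool::Fast, (uint32_t)p});
        else               pageTable.push_back({Pool::Slow, (uint32_t)(p - fastPages)});
    }

    uint64_t fast = 0, slow = 0;
    for (const PageEntry& e : pageTable) (e.pool == Pool::Fast ? fast : slow)++;

    printf("%llu pages backed by the fast pool, %llu overflowed to the slow pool\n",
           (unsigned long long)fast, (unsigned long long)slow);
    return 0;
}
```

A real allocator would obviously place the most bandwidth-hungry surfaces in the fast pool rather than just filling it front to back; the point is only that one resource can span both pools through the page table.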
 
With GDDR5's high latency you can't do this without wasting TONS of GPU cycles where your GPU is sitting idle doing nothing. With low-latency memory holding your tiled assets you can keep sampling rapidly to keep the GPU chewing through these tile sets almost continuously. The typical approach to a GPU handling tiled assets (like sparsely partitioned textures, for instance) requires your GPU to process things across a single plane, then the next plane, then the other planes until you are done with them, and then you composite all of those together. When you have high-latency memory storing your tiled assets you have to hide that latency by bringing in huge chunks of data to work on as a group, hence your GPU works on the planes one at a time (lots of tiles) instead of sampling tile by tile, which stalls your GPU.

GPUs already inherently deal with high latency fine, as they are designed to, after all. That, combined with the fact that GDDR5 isn't actually higher latency than, say, DDR3, should point you to the conclusion that this won't really be an issue. Also, the GPU is always going to work on large amounts of data at a time; this is evidenced by the huge latency of the caches (remember the discussion we had about this). You can't get around the cache latency.
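
For a sense of what "designed to deal with latency" means in numbers: a GPU hides memory latency by keeping enough independent work in flight to cover it. A back-of-envelope occupancy estimate; the cycle counts are purely illustrative, not measured figures for either console.

```cpp
// Back-of-envelope latency hiding: roughly how much independent work a GPU
// needs in flight so its ALUs stay busy while memory requests are outstanding.
// The cycle counts are illustrative assumptions, not measurements.
#include <cstdio>

int main() {
    const double memLatencyCycles = 400.0; // assumed DRAM round-trip latency
    const double aluCyclesPerLoad = 8.0;   // assumed ALU work available per memory access

    // Little's-law flavour: in-flight batches needed ~= latency / work per request.
    const double batchesNeeded = memLatencyCycles / aluCyclesPerLoad;

    printf("~%.0f independent batches of work are needed to cover %.0f cycles of latency.\n",
           batchesNeeded, memLatencyCycles);
    printf("With that much parallelism available, the exact latency of the memory pool\n"
           "matters far less than its bandwidth.\n");
    return 0;
}
```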
 
Know what's amazing?

Three OSes, background features, connectivity, and an HD game.


Yet, it's underpowered? ROFLMAO
 
The XO is far from underpowered. In fact, in some ways it's more powerful. We know this. So do the Sony fanboys. It's just going to be a reaaallly tough pill for them to swallow due to the emotional income they receive when speaking on the mighty Sony. :D But hey, even in this day and age a gaming console can still find its fans. Good on 'em.
 

Eh? This is like saying every console is 'in some way more powerful' than its competitors. It just happens that this gen doesn't have any real secret sauce to speak of, and as such PCs are probably already strides ahead.
 
GPUs already inherently deal with high latency fine...

No. They hide latency by chewing on large data sets all at once (hence they use high bandwidth to facilitate that). That is done by design on the software end by the devs designing the game.

...that, combined with the fact that GDDR5 isn't actually higher latency than, say, DDR3, should point you to the conclusion that this won't really be an issue.

It's an enormous issue if we are comparing a depth-first approach to tiled rendering on the two machines. Had you read my posts you'd notice that we are looking at GDDR5 vs. the eSRAM, by the way, not DDR3. You trade low latency for bandwidth unless you use embedded memory (which gives you both). You simply cannot do depth-first tiled rendering efficiently at all with GDDR5 as your memory pool. Your GPU will sit idle twiddling its thumbs waiting for the next tile to be brought in, and it will do so between every single tile you process, due to the high latency.

Using eSRAM, with its low latency, you don't have that problem: the time intervals spent waiting to bring in the next tile are small, so your GPU works almost continuously without notable stalls. Read the patent for more info.
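
This is the tortoise-and-hare claim from earlier reduced to arithmetic. Whether real hardware behaves this way, i.e. whether those stalls really can't be overlapped with other work, is exactly what is being disputed in this thread, and every number below is made up for illustration.

```cpp
// The "tortoise and the hare" claim as plain arithmetic: a GPU that is
// slightly slower per tile but never stalls vs. one that is faster per tile
// but pays a stall between tiles. Every number here is made up.
#include <cstdio>

int main() {
    const double tiles = 10000.0;

    // Machine A: faster compute per tile, long stall before each tile
    // because the next tile comes from high-latency memory.
    const double aComputeUs = 1.0, aStallUs = 0.5;

    // Machine B: slightly slower compute per tile, negligible stall because
    // the tiles sit in a low-latency local pool.
    const double bComputeUs = 1.2, bStallUs = 0.02;

    printf("A: %.1f ms for the tile workload\n", tiles * (aComputeUs + aStallUs) / 1000.0);
    printf("B: %.1f ms for the tile workload\n", tiles * (bComputeUs + bStallUs) / 1000.0);

    // Note: this only favours B if A genuinely cannot overlap the stall with
    // other work, which is the contested assumption in the posts above and below.
    return 0;
}
```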

Also, the GPU is always going to work on large amounts of data at a time...

It has to in order to be remotely efficient. That's the limitation of high-bandwidth external memory... it forces your software design to focus on high bandwidth in order to get things done, as opposed to alternatives. Stuff like AI and physics, for instance, is typically a poor fit for GPUs because of this. You can only move over the small chunks of their code that fit the bill of highly repetitive and predictable tasks, and everything else has to stay CPU-side (since it is very sensitive to latency).

This is MS's bet on the GPGPU stuff... having more parallelism (i.e. a few more CUs) wastes lots of computing power, as those must be general purpose by design while the tasks you put under their belt are going to be highly specialized. Sony just threw more CUs at the issue. MS is betting on those tasks working better with lower-latency memory, which typically suits that code base better (physics, AI).
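
The "poor fit" point about AI and physics is essentially about dependent, unpredictable memory accesses. Here is a tiny CPU-side illustration of the difference between a latency-bound pointer chase and a streaming pass over the same data; it says nothing about either console specifically, it just shows why serial, dependent accesses are dominated by latency rather than bandwidth.

```cpp
// Latency-bound vs. streaming access over the same array: the pointer chase
// has a serial dependency (each load needs the previous result), so it is
// limited by memory latency; the streaming sum is limited by bandwidth.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const size_t n = 1 << 22;  // ~4M elements
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), 0);
    std::shuffle(next.begin(), next.end(), std::mt19937(123));  // random permutation

    auto t0 = std::chrono::steady_clock::now();
    size_t idx = 0;
    for (size_t i = 0; i < n; ++i) idx = next[idx];  // dependent loads (latency-bound)
    auto t1 = std::chrono::steady_clock::now();
    size_t sum = 0;
    for (size_t i = 0; i < n; ++i) sum += next[i];   // independent loads (streaming)
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    printf("pointer chase: %lld ms (idx=%zu)\n", (long long)ms(t1 - t0), idx);
    printf("streaming sum: %lld ms (sum=%zu)\n", (long long)ms(t2 - t1), sum);
    return 0;
}
```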
 
Eh? This is like saying every console is 'in some way more powerful' than its competitors. It just happens that this gen doesn't have any real secret sauce to speak of, and as such PCs are probably already strides ahead.

I don't know about any secret sauce, mate, and I don't care. What I do know is that the XO isn't underpowered AT ALL.
If we take RYSE as an example, it becomes increasingly difficult to believe that the PS4 is more powerful, as there isn't a game on it right now that looks better than RYSE. Not one. I don't know... you be the judge... :cool:
 
The XO is far from underpowered. In fact, in some ways it's more powerful. We know this. So do the Sony fanboys. It's just going to be a reaaallly tough pill for them to swallow due to the emotional income they receive when speaking on the mighty Sony. :D But hey, even in this day and age a gaming console can still find its fans. Good on 'em.

that's some serious delusion.
 
We're talking about a closed environment entertainment console. I wouldn't call that underpowered by any stretch.

In a closed environment, according to Carmack, you are looking at twice the performance. That's 2.6 TFLOPS of real-world performance.

Still underpowered when PCs are already in the 5 TFLOP world.

And before you say price and blah blah blah, the 360 had a GPU equivalent to a $500 PC GPU at launch.

So, yes, underpowered.
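
For reference, here is the arithmetic behind the 2.6 TFLOP figure, assuming the commonly cited ~1.31 TFLOPS peak for Xbox One's GPU and taking the "roughly 2x effective in a closed box" remark attributed to Carmack at face value; the 2x factor is a rule of thumb, not a measurement.

```cpp
// Where the 2.6 TFLOP figure comes from: the commonly cited ~1.31 TFLOPS
// peak for Xbox One's GPU (768 ALU lanes * 2 ops/cycle * 853 MHz), doubled
// per the "closed box ~2x effective" rule of thumb attributed to Carmack.
#include <cstdio>

int main() {
    const double aluLanes        = 768.0;  // 12 CUs * 64 lanes
    const double opsPerCycle     = 2.0;    // fused multiply-add
    const double clockGHz        = 0.853;
    const double closedBoxFactor = 2.0;    // claimed rule of thumb, not measured

    const double peakTflops = aluLanes * opsPerCycle * clockGHz / 1000.0;
    printf("Peak:        %.2f TFLOPS\n", peakTflops);
    printf("'Effective': %.2f TFLOPS\n", peakTflops * closedBoxFactor);
    return 0;
}
```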
 

Maybe you have me confused with someone who gets involved in the console vs. PC sh!t. But you're seriously mistaken. I don't need to stroke the PC to know it's superior in gaming. By the same token, you're thinking too hard in trying to compare. Regardless, it's doing everything an HTPC does and more, which is completely surprising considering its lackluster architecture.

Your bait's still in the water.
 

Thinking too hard in trying to compare? Um, simply comparing Microsoft's philosophy with the 360 vs. Microsoft's philosophy with the XB1 is not "thinking hard", because the difference is so obvious.

If that's how you feel, then maybe you should stay out of the hardware thread?

And you are mistaken about troll baiting.