I don't know what I'm talking about, but I'm just going to keep talking!
Except there is.
If someone told me the same thing was in a PS4, I would still call it special sauce. It's strung together with conspiracy theories based on dev quotes, misunderstanding MS tech presentations, attacking AMD engineers and patch notes, hand waving away any requests for evidence, and claims of "PS4 being at a significant disadvantage".
There's almost no difference between these claims and something dredged up from misterx's blog. Same amount of evidence, same plausibility, same rationale and logic. The only difference is some of you are bizarrely taking it seriously, just as a reaction to someone you dislike using logic, reason, skepticism, and evidence to call a spade a spade.
And once again, tiled texture techniques give devs more effective memory size and bandwidth, plus some side benefits like cheaper anisotropic filtering and better LoD scaling. But they don't make the GPU any better at actually rendering things. You can bring a GPU to its knees with a 4K texture just by rendering 5 billion polys with that texture applied. It's not some magical thing that will double a system's effective TFLOPS count.
If there is a difference in how the two consoles handle tiled texturing tasks, it's likely to be small imo.
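The memory-savings point above is easy to put numbers on. Here's a rough sketch (not real API code) of why partially resident ("tiled") textures save memory: only the tiles actually sampled need backing pages. The 64 KiB tile size matches D3D tiled resources; the visible-tile count is made-up illustrative data.

```python
# Toy estimate: resident memory for a tiled texture vs. a fully
# resident one. TILE_BYTES matches the 64 KiB tile of D3D tiled
# resources; the visibility set is invented for illustration.

TILE_BYTES = 64 * 1024

def full_size(width, height, bytes_per_texel):
    # Memory if the whole texture must be resident.
    return width * height * bytes_per_texel

def resident_size(visible_tiles):
    # Only tiles actually sampled this frame need backing memory.
    return len(visible_tiles) * TILE_BYTES

full = full_size(4096, 4096, 4)            # 4K x 4K RGBA8 = 64 MiB
resident = resident_size(set(range(120)))  # say 120 tiles visible
print(full // (1024 * 1024), "MiB full vs",
      resident // (1024 * 1024), "MiB resident")
```

Big savings on memory footprint, but note this does nothing for shading or polygon throughput, which is exactly the post's point.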
We take reliable information seriously, like what came from MS's dev docs, their Hot Chips presentation, their patents, and their DF interview. If you take issue with the reliability of those sources, you need to actually make a case for what your contention is. We take them at face value because the claims made mesh thoroughly with what we know already about the subjects being discussed, and when we get surprising new info it is always explained well (like how the CPU is the most limiting factor in determining framerates in modern games; devs at B3D noted this beforehand too, so it's not just magical PR).
What you are seemingly trying to address is the tiled rendering patent I've noted. I've no clue what misterx is claiming these days. Never read his blog or whatever it is and don't care to. Read the patent. The gist is that with a low latency mem pool like the eSRAM and display planes that partition the image plane into quadrants, you can exploit the method for processing tiled assets in a unique way that dramatically improves rendering efficiency and saves tons of bandwidth by avoiding the copying operations you otherwise need for tiled rendering. It improves rendering efficiency by tiling depth first instead of breadth first, meaning they can leverage the low latency eSRAM to sample multiple planes deep for a single tile and then process the other tiles one at a time.
With GDDR5 high latency mem you can't do this without wasting TONS of GPU cycles where your GPU is sitting idle doing nothing. With low latency memory holding your tiled assets you can keep sampling rapidly to keep the GPU chewing through these tile sets almost continuously. The typical approach to a GPU handling tiled assets (like sparsely partitioned textures, for instance) requires your GPU to process things across a single plane, then you do the next plane, then the other planes until you are done with them, then you composite all of those together. When you have high latency memory storing your tiled assets you have to hide that latency by bringing in huge chunks of data to work on as a group, hence your GPU works on the planes one at a time (lots of tiles) instead of sampling tile by tile which stalls your GPU.
The result is that with low latency memory and some buffer to store your output in localized regions of the screen (i.e. display planes that are broken into quadrants), you can do depth-first tiled rendering effectively, and that's a LOT more efficient time-wise than breadth-first tiled rendering, which is the best available option for a machine using higher latency mem like PS4. So yes, it really would significantly improve the real-world effective output on a "flops" basis. It becomes a tortoise-and-the-hare scenario: on PS4 your tiled assets are rendered quickly with tons of stalls between each major set of ops, while on X1 they render slightly slower (not much though) but without long stalls, so within a frame you will be able to get more done with the "slower" GPU.
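The two traversal orders being argued about can be sketched abstractly. This is a toy model, not real GPU code: a tiny planes-by-tiles grid, plus a made-up cost model where each contiguous batch (a run of work sharing the same working set) pays roughly one memory-latency stall. Whether real hardware behaves anything like this is exactly what the thread is disputing.

```python
# Toy illustration of breadth-first vs depth-first tile traversal.
# PLANES x TILES is a tiny grid; the "one stall per batch" cost model
# is invented for illustration only.

PLANES, TILES = 3, 4

def breadth_first():
    # Process one whole plane at a time, compositing at the end.
    return [(p, t) for p in range(PLANES) for t in range(TILES)]

def depth_first():
    # Finish every plane's copy of a tile before the next tile.
    return [(p, t) for t in range(TILES) for p in range(PLANES)]

def batches(order, key):
    # Count contiguous runs sharing the batching key; under the toy
    # model each run costs roughly one latency stall.
    runs, prev = 0, object()
    for item in order:
        if key(item) != prev:
            runs, prev = runs + 1, key(item)
    return runs

# Breadth-first batches by plane; depth-first batches by tile.
print(batches(breadth_first(), key=lambda pt: pt[0]))  # 3 plane-sized batches
print(batches(depth_first(), key=lambda pt: pt[1]))    # 4 tile-sized batches
```

The claim in the posts above amounts to saying depth-first's smaller, more frequent batches are only affordable when each batch switch is cheap, i.e. when the tiled assets sit in low-latency memory.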
What you need to understand is that you are literally asserting that MS's engineers, their dev docs, their patents, and what actual devs at B3d say is all bogus and to be conflated with misterx's conspiracy theories. You want us to dismiss ALL of the reliable info in those sources. Why? Because you discovered that you can easily claim they represent 'secret sauce', and that allows you Sony fanboys to dismiss any and all discussion totally out of hand on issues you fear make X1 look more capable than you and the ignorant hivemind had assumed.
Just need to ignore them, especially if they come from "there"!

What Mr. X does is take real things and then make science fiction out of them. Like he takes the idea of 4 display planes and then multiplies the X1's TFLOPS by 4 to get 5.2 TFLOPS. It's goofy crap like that. He's now talking about taking X-rays of the X1 GPU to find the hidden power/hardware.
The stuff that XboxNeo and Astrograd are talking about are real things that MS and reputable tech websites have talked about. No doubt that MisterX has read and watched all the same stuff...and is now taking everything way out of context and dreaming up impossible conclusions.
It does not de-legitimize the real impact these features could have this gen. Every time someone mentions some new MS tech/API feature, dedicated Gaffers use words like "secret sauce" and "Mister X" to de-legitimize them. That's why we can tell where Consolewarz gets his information.
Didn't they just X-ray the PS4's SoC to see it? Why wouldn't they do the same to the X1's?
Sorry to burst your bubble, but the insiders there are already preparing people to not trust what comes from Chipworks' X-ray analysis. That it will be deceptively wrong.

They are going to do it, and I will have my popcorn ready for the misterxmedia blog when there are no 16 CUs, DPU, flux capacitor, PowerPC CPU, dual-7970-equivalent GPU, etc.
The back-pedaling from the "insiders" and the anger from the people that actually believed it should be pretty entertaining.
Andrew Goossen: I just wanted to jump in from a software perspective. This controversy is rather surprising to me, especially when you view ESRAM as the evolution of eDRAM from the Xbox 360. No-one questions on the Xbox 360 whether we can get the eDRAM bandwidth concurrent with the bandwidth coming out of system memory. In fact, the system design required it. We had to pull over all of our vertex buffers and all of our textures out of system memory concurrent with going on with render targets, colour, depth, stencil buffers that were in eDRAM.
Of course with Xbox One we're going with a design where ESRAM has the same natural extension that we had with eDRAM on Xbox 360, to have both going concurrently. It's a nice evolution of the Xbox 360 in that we could clean up a lot of the limitations that we had with the eDRAM. The Xbox 360 was the easiest console platform to develop for, it wasn't that hard for our developers to adapt to eDRAM, but there were a number of places where we said, "Gosh, it would sure be nice if an entire render target didn't have to live in eDRAM," and so we fixed that on Xbox One where we have the ability to overflow from ESRAM into DDR3 so the ESRAM is fully integrated into our page tables and so you can kind of mix and match the ESRAM and the DDR memory as you go
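The "overflow from ESRAM into DDR3" idea Goossen describes can be sketched as a placement decision over 64 KiB pages: because the ESRAM sits in the same page tables as DDR, one surface can have pages in both pools. The first-fit policy and page size here are my assumptions for illustration, not the real allocator.

```python
# Hedged sketch of mixed ESRAM/DDR placement for a surface.
# 32 MiB ESRAM is the real X1 figure; the 64 KiB page size and the
# naive fill-ESRAM-first policy are illustrative assumptions.

ESRAM_BYTES = 32 * 1024 * 1024
PAGE_BYTES = 64 * 1024

def place_surface(surface_bytes, esram_free):
    """Return (pages_in_esram, pages_in_ddr, esram_free_after)."""
    pages = -(-surface_bytes // PAGE_BYTES)          # ceiling division
    esram_pages = min(pages, esram_free // PAGE_BYTES)
    return (esram_pages, pages - esram_pages,
            esram_free - esram_pages * PAGE_BYTES)

# A 1080p RGBA8 render target (~8.3 MB) fits entirely in ESRAM...
rt = 1920 * 1080 * 4
in_esram, in_ddr, free = place_surface(rt, ESRAM_BYTES)
# ...but a second large surface spills its tail into DDR3.
big = 40 * 1024 * 1024
in_esram2, in_ddr2, free = place_surface(big, free)
print(in_esram, in_ddr, in_esram2, in_ddr2)
```

The point of the interview quote is just that this spill is transparent at the page-table level, unlike the 360's requirement that a whole render target live in eDRAM.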
Know what's amazing?
3 OSes, background features, connectivity, and an HD game.
Yet, it's underpowered? ROFLMAO
It is underpowered. So is the PS4.
The XO is far from underpowered. In fact, in some ways it's more powerful. We know this. So do the Sony fanboys. It's just going to be a really tough pill for them to swallow due to the emotional income they receive when speaking on the mighty Sony. But hey, even in this day and age a gaming console can still find its fans. Good on 'em.
GPUs already inherently deal with high latency fine...
...and that, combined with the fact that GDDR5 isn't actually higher latency than, say, DDR3, should point you to the conclusion that this won't really be an issue.
Also, the GPU is always going to work on large amounts of data at a time...
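The "GPUs already deal with high latency fine" point is usually explained via massive multithreading: with enough wavefronts in flight per compute unit, some wavefront is almost always ready to issue while the others wait on memory. Here's a toy model with invented numbers (not AMD's) of how utilization saturates as in-flight wavefront count grows.

```python
# Toy latency-hiding model. Each wavefront alternates between some ALU
# work and a memory stall; aggregate utilization saturates once the
# other wavefronts' combined ALU work covers one wavefront's stall.
# All cycle counts are made up for illustration.

def utilization(wavefronts, mem_latency_cycles, work_cycles_between_loads):
    cycle_per_iter = work_cycles_between_loads + mem_latency_cycles
    busy = wavefronts * work_cycles_between_loads
    return min(1.0, busy / cycle_per_iter)

# 400-cycle memory latency, 50 cycles of ALU work per load:
for w in (1, 4, 10):
    print(w, round(utilization(w, 400, 50), 2))
```

With one wavefront the unit idles most of the time; with ten, the latency is fully hidden under this model, which is why raw memory latency matters far less to a GPU than to a CPU.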
Eh? This is like saying every console is "in some way more powerful" than its competitors. It just happens that this gen doesn't have any real secret sauce to speak of, and as such PCs are probably already strides ahead.
The XO is far from underpowered. In fact, in some ways it's more powerful. We know this. So do the Sony fanboys. It's just going to be a really tough pill for them to swallow due to the emotional income they receive when speaking on the mighty Sony. But hey, even in this day and age a gaming console can still find its fans. Good on 'em.
It is underpowered. So is the PS4.
That's some serious delusion.
We're talking about a closed environment entertainment console. I wouldn't call that underpowered by any stretch.
In a closed environment, according to Carmack, you are looking at roughly twice the effective performance. That's about 2.6 TFLOPS of real-world performance.
Still underpowered when PCs are already in the 5 TFLOPS range.
And before you say price and blah blah blah, the 360 had a GPU equivalent to a $500 PC GPU at launch.
So, yes, underpowered.
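For what it's worth, the arithmetic behind that 2.6 figure checks out if you start from the commonly cited Xbox One GPU spec (768 shader ALUs at 853 MHz, with an FMA counted as two ops); the "2x in a closed box" multiplier is the Carmack rule-of-thumb claimed above, not a measured number.

```python
# Peak-TFLOPS arithmetic: shaders x clock x 2 ops per FMA.
# 768 ALUs and 853 MHz are the widely reported X1 GPU figures; the
# closed-platform 2x factor is the rule of thumb cited in the post.
shaders, clock_ghz = 768, 0.853
tflops = shaders * clock_ghz * 2 / 1000
print(round(tflops, 2))        # ~1.31 peak
print(round(tflops * 2, 2))    # ~2.62 with the claimed 2x factor
```

Peak TFLOPS is of course a theoretical ceiling, which is the whole reason the "effective output" argument earlier in the thread exists at all.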
Maybe you have me confused with someone that gets involved in the console vs PC sh!t. But you're seriously mistaken. I don't need to stroke the PC to know it's superior in gaming. On the same hand, you're thinking too hard in trying to compare. Regardless, it's doing everything an HTPC is doing and more. Which is completely surprising considering its lackluster architecture.
Your bait's still in the water.