The Xbox One Hardware Information Thread

Thinking too hard in trying to compare? Um, simply comparing Microsoft's philosophy with the 360 vs Microsoft's philosophy with the XB1 is not "thinking hard", because the difference is so obvious.

If that's how you feel, then maybe you should stay out of the hardware thread?

And you are mistaken about troll baiting.

My comment was initially about the fact that many like to write it off as an underpowered, overpriced object of plastic, when in reality it's doing much, much more. The philosophy has changed because the market has changed. That's the obvious part.
 
Thinking too hard in trying to compare? Um, simply comparing Microsoft's philosophy with the 360 vs Microsoft's philosophy with the XB1 is not "thinking hard", because the difference is so obvious.

There is a vast gap between what the internet horde presumes was MS's design philosophy on X1 and what it has turned out to be in reality. In reality their strategy was very much a thorough expansion of what they had with the 360, lumped in with some key innovations central to the platform's pillars. It's only obvious to those who actually read facts and insights from the dev docs and the engineers themselves, instead of clinging to the narrative driven by the mindless drones constantly spreading FUD on forums.
 
No. They hide latency by chewing on large data sets all at once (hence the high bandwidth to facilitate that). That is done by design on the software end, by the devs designing the game.



It's an enormous issue if we are comparing a depth-first approach to tiled rendering on the two machines. Had you read my posts you'd notice that we are looking at GDDR5 vs the eSRAM, btw, not DDR3. You trade low latency for bandwidth unless you use embedded memory (which gives you both). You simply cannot do depth-first tiled rendering efficiently at all with GDDR5 as your memory pool: your GPU will sit idle, twiddling its thumbs waiting for the next tile to be brought in, and it will do so between every single tile you process, due to the high latency.

Using eSRAM with low latency you don't have that problem, as the time intervals spent waiting to bring in the next tile are small, so your GPU is working almost continuously without notable stalls. Read the patent for more info.
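To put some rough numbers on what I mean, here's a tiny C++ sketch of that depth-first tile loop. Every figure and name in it is invented purely for illustration, and it assumes zero prefetch overlap, so treat it as a cartoon of the stall rather than real XB1/PS4 behavior:

```cpp
#include <cstdio>

// All numbers here are hypothetical, purely for illustration.
constexpr int    kNumTiles         = 1024;  // tiles rendered per frame
constexpr double kProcessPerTileUs = 10.0;  // GPU work per tile (microseconds)
constexpr double kFetchHighLatUs   = 8.0;   // bringing a tile in from a high-latency pool
constexpr double kFetchLowLatUs    = 0.5;   // bringing a tile in from a low-latency pool

// Depth-first tiling with no prefetch overlap: each tile must be resident
// before the GPU can touch it, so the fetch latency lands on the critical
// path once per tile.
double frameTimeUs(double fetchLatencyUs)
{
    double total = 0.0;
    for (int tile = 0; tile < kNumTiles; ++tile)
        total += fetchLatencyUs + kProcessPerTileUs;  // stall, then work
    return total;
}

int main()
{
    std::printf("high-latency pool: %.0f us per frame\n", frameTimeUs(kFetchHighLatUs));
    std::printf("low-latency pool:  %.0f us per frame\n", frameTimeUs(kFetchLowLatUs));
}
```

The only point is that the per-tile fetch latency sits on the critical path once for every tile, so a lower-latency local pool keeps the GPU working instead of waiting.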



It has to in order to be remotely efficient. That's the limitation of the high-bandwidth external memory: it forces your software design to focus on high bandwidth in order to get things done, as opposed to alternatives. Stuff like AI and physics, for instance, is typically a poor fit for GPUs because of this. You can only move over the small chunks of their code that fit the bill of highly repetitive and predictable tasks, and everything else has to stay CPU-side (since it is very sensitive to latency).

This is MS's bet on the GPGPU stuff: having more parallelism (i.e. a few more CUs) wastes a lot of computing power, since those CUs must be general-purpose by design while the tasks you put under their belt are going to be highly specialized. Sony just threw more CUs at the issue. MS is betting on those tasks working better with lower-latency memory, which typically suits that code base (physics, AI) better.

If your cache hierarchy takes 200-300 cycles and your memory access itself usually takes, say, another 200-300 cycles, then swapping the memory out for a 1-cycle variant doesn't make as big a difference as you are thinking. It changes the equation from (200-300) + (200-300) to (200-300) + 1: you effectively end up with an access that is twice as quick, but that's nowhere near as big a difference as you are making out.

Also, it should be noted that GPGPU is horrible for AI not because of latency, but because GPUs are inherently bad at code that diverges at many different points, since entire wavefronts have to take the same branch. That won't be helped in any way, shape or form by lower-latency memory.
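For anyone unfamiliar with what divergence actually costs, here's a toy C++ model of a wavefront hitting a branch. The wave size and per-branch costs are invented for illustration only; the point is that when lanes disagree, the hardware runs both sides back to back with the non-participating lanes masked off, so the whole wave pays for both paths regardless of how fast the memory is:

```cpp
#include <array>
#include <cstdio>

// Toy SIMT model of `if (cond) A(); else B();` on a 64-lane wavefront.
// Sizes and costs are invented purely for illustration.
constexpr int kWaveSize = 64;
constexpr int kCostA    = 100;  // "cycles" to execute branch A
constexpr int kCostB    = 100;  // "cycles" to execute branch B

int wavefrontCost(const std::array<bool, kWaveSize>& cond)
{
    bool anyTrue = false, anyFalse = false;
    for (bool c : cond) {
        if (c) anyTrue = true; else anyFalse = true;
    }
    // If lanes disagree, both sides execute serially with the inactive lanes
    // masked off, so the whole wavefront pays for both branches.
    int cost = 0;
    if (anyTrue)  cost += kCostA;
    if (anyFalse) cost += kCostB;
    return cost;
}

int main()
{
    std::array<bool, kWaveSize> uniform{};    // every lane takes the same path
    std::array<bool, kWaveSize> divergent{};  // lanes split between the two paths
    for (int i = 0; i < kWaveSize; ++i) divergent[i] = (i % 2 == 0);

    std::printf("uniform wavefront:   %d cycles\n", wavefrontCost(uniform));
    std::printf("divergent wavefront: %d cycles\n", wavefrontCost(divergent));
}
```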
 
If your cache hierarchy takes 200-300 cycles and your memory access itself usually takes, say, another 200-300 cycles, then swapping the memory out for a 1-cycle variant doesn't make as big a difference as you are thinking. It changes the equation from (200-300) + (200-300) to (200-300) + 1: you effectively end up with an access that is twice as quick, but that's nowhere near as big a difference as you are making out.

Game developers already use tiled rendering techniques to fit the tile in the GPU's cache, and they get a massive speedup over rendering the game into a single monolithic render target. The problem has always been the overhead and latency of moving the tiles between system memory and GPU memory. It matters more in middle-sort techniques than late-sort techniques because of reading from the g-buffer.

Also, it should be noted that GPGPU is horrible for AI not because of latency, but because GPUs are inherently bad at code that diverges at many different points, since entire wavefronts have to take the same branch. That won't be helped in any way, shape or form by lower-latency memory.

That depends. GPUs are really good at forward neural nets, much better than CPUs, because each of the nodes in a neural net is a vector operation. Adaptive neural nets, on the other hand... Anyhow, all the expensive operations in video game AI are easily offloaded to the GPU: ray casts, cone intersections. But video game AI isn't that big of a deal, because it does not need to be computed every frame.
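To illustrate the "each node is a vector operation" point, here's a minimal C++ sketch of one fully connected layer's forward pass. The layer sizes, weights and the sigmoid choice are just made-up assumptions; what matters is that every output node is a plain dot product, exactly the wide, regular, branch-free math a GPU chews through:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// One fully connected layer: out[node] = sigmoid(dot(W[node], in) + bias[node]).
// Written as plain CPU code, but the inner loop is uniform vector math that
// maps almost one-to-one onto GPU hardware.
std::vector<float> forwardLayer(const std::vector<std::vector<float>>& W,
                                const std::vector<float>& bias,
                                const std::vector<float>& in)
{
    std::vector<float> out(W.size());
    for (std::size_t node = 0; node < W.size(); ++node) {
        float sum = bias[node];
        for (std::size_t i = 0; i < in.size(); ++i)
            sum += W[node][i] * in[i];                // the vector operation
        out[node] = 1.0f / (1.0f + std::exp(-sum));   // sigmoid activation
    }
    return out;
}

int main()
{
    // Tiny 2-input, 3-node layer with arbitrary weights.
    std::vector<std::vector<float>> W = {{0.5f, -0.2f}, {0.1f, 0.4f}, {-0.3f, 0.8f}};
    std::vector<float> bias = {0.0f, 0.1f, -0.1f};
    std::vector<float> in   = {1.0f, 2.0f};

    for (float v : forwardLayer(W, bias, in))
        std::printf("%.3f\n", v);
}
```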
 
Probably been said, but can the Xbox One output 7.1ch surround sound?
 
9PM Pacific is the embargo lift. 12am EST in the US.

Adjust accordingly for your timezone.
 
9PM Pacific is the embargo lift. 12am EST in the US.

Adjust accordingly for your timezone.

Four hours and fifteen minutes from the time of this post (within a minute or so).
 
If your cache hierarchy takes 200-300 cycles and your memory access itself usually takes, say, another 200-300 cycles, then swapping the memory out for a 1-cycle variant doesn't make as big a difference as you are thinking. It changes the equation from (200-300) + (200-300) to (200-300) + 1: you effectively end up with an access that is twice as quick, but that's nowhere near as big a difference as you are making out.

What are you calling a "cache"? The eSRAM is where the tiles are being stored, and that reportedly has MUCH lower latency than anything approaching several hundred cycles. With the data in eSRAM being snooped by the CPU and seen by the GPU, I don't see how it's not the very definition of a cache. It's where you put stuff that you need to sample over and over during a frame.

Also, it should be noted that GPGPU is horrible for AI not because of latency, but because GPUs are inherently bad at code that diverges at many different points, since entire wavefronts have to take the same branch. That won't be helped in any way, shape or form by lower-latency memory.

You just reiterated what I said while ignoring the rest of my post. Huge chunks of AI code are extremely sensitive to latency. So is physics.
 
Astro, did BK ever confirm anything about DTS-MA (even with what little we know, or what he's allowed to say, about SHAPE)?
 
What are you calling a "cache"? The eSRAM is where the tiles are being stored, and that reportedly has MUCH lower latency than anything approaching several hundred cycles. With the data in eSRAM being snooped by the CPU and seen by the GPU, I don't see how it's not the very definition of a cache. It's where you put stuff that you need to sample over and over during a frame.

The CPU's access to the eSRAM is incredibly slow (MB/s), so don't pin any hopes on it.

The cache I speak of is the real cache in the GPU (512KB L2 + L1, etc). The eSRAM does not meet the definition of a cache because it lacks hardware lookup (it's just a bunch of SRAM). It's a scratchpad due to the lack of this lookup hardware.
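To be concrete about the distinction (a rough C++ sketch, with sizes and names made up by me): a cache keeps tags alongside the data and decides in hardware whether an access hits, filling and evicting itself transparently, while a scratchpad is just directly addressed storage that software has to fill and spill explicitly:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// Toy direct-mapped cache: a tag is kept per line and checked on every access,
// so a load either hits or transparently falls through to memory.
struct ToyCache {
    static constexpr std::size_t kLines = 256;
    std::array<std::uint32_t, kLines> tags{};
    std::array<std::uint32_t, kLines> data{};
    std::array<bool, kLines>          valid{};

    std::optional<std::uint32_t> load(std::uint32_t addr) const {
        std::size_t   line = (addr / 4) % kLines;  // index bits pick the line
        std::uint32_t tag  = addr / (4 * kLines);  // remaining bits are the tag
        if (valid[line] && tags[line] == tag)
            return data[line];                     // hit: served transparently
        return std::nullopt;                       // miss: hardware fetches from memory
    }
};

// Toy scratchpad: no tags, no hit/miss logic, just a flat block of storage
// that software addresses directly and has to manage itself.
struct ToyScratchpad {
    std::array<std::uint32_t, 8192> storage{};
    std::uint32_t read(std::size_t offset) const    { return storage[offset]; }
    void write(std::size_t offset, std::uint32_t v) { storage[offset] = v; }
};

int main() {}  // the two structs are only here to contrast the two shapes
```

Whether the XB1's eSRAM counts as one or the other is exactly what's being argued here, so take the sketch as the textbook distinction, not a statement about the actual hardware.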

BTW it would be nice if you could show us where the latency figures for the eSRAM are coming from since you seem to know the actual number.

It should also be noted that this was said when they were questioned about the latency of the eSRAM giving a performance benefit.

Digital Foundry: There's been some discussion online about low-latency memory access on ESRAM. My understanding of graphics technology is that you forego latency and you go wide, you parallelise over however many compute units are available. Does low latency here materially affect GPU performance?


Nick Baker: You're right. GPUs are less latency sensitive. We've not really made any statements about latency.

From Wikipedia, with regard to computer caches:

In computer science, a cache (/ˈkæʃ/kash)[1] is a component that transparently stores data so that future requests for that data can be served faster.

The eSRAM is missing the bold + underlined part ("transparently").
 
The CPU's access to the eSRAM is incredibly slow (MB/s), so don't pin any hopes on it.

...which is all it'd likely ever need.

The cache I speak of is the real cache in the GPU (512KB L2 + L1, etc).

...which has no relevance to what we were discussing.

The eSRAM does not meet the definition of a cache because it lacks hardware lookup (it's just a bunch of SRAM). It's a scratchpad due to the lack of this lookup hardware.

That's not the definition of a cache. MS's engineers specifically called it a cache at the reveal. You are clinging to a narrow understanding of the term, not its actual meaning.

BTW it would be nice if you could show us where the latency figures for the eSRAM are coming from since you seem to know the actual number.

They aren't public, but there were people on B3D who had noted it a long time ago as being 'orders of magnitude' less than external memory.

It should also be noted that this was said when they were questioned about the latency of the eSRAM giving a performance benefit.

And had you read the rest of the interview you'd see multiple other parts where they cite the opposite of your interpretation there. It seems Baker didn't understand what question was being asked. They spell out their intentions with latency and GPGPU if you read the interview instead of cherry-picking quotes lacking precision and context to suit what you wanted to read. There are at least three other parts in the interview that tell us low latency is a vital part of the system's design, including for GPGPU prospects, btw.

You can also see the dev docs citing real advantages from the eSRAM's latency in the CBs/DBs.

From Wikipedia, with regard to computer caches:

...

The eSRAM is missing the bold + underlined part ("transparently").

No, transparency in the context you are referring to is fully covered for a pool that is seen by all the clients.
 
...which is all it'd likely ever need.



...which has no relevance to what we were discussing.



That's not the definition of a cache. MS's engineers specifically called it a cache at the reveal. You are clinging to a narrow understanding of the term, not its actual meaning.



They aren't public, but there were people on B3D who had noted it a long time ago as being 'orders of magnitude' less than external memory.



And had you read the rest of the interview you'd see multiple other parts where they cite the opposite of your interpretation there. It seems Baker didn't understand what question was being asked. They spell out their intentions with latency and GPGPU if you read the interview instead of cherry-picking quotes lacking precision and context to suit what you wanted to read. There are at least three other parts in the interview that tell us low latency is a vital part of the system's design, including for GPGPU prospects, btw.

You can also see the dev docs citing real advantages from the eSRAM's latency in the CBs/DBs.



No, transparency in the context you are referring to is fully covered for a pool that is seen by all the clients.

Honestly not worth your time, because there is no changing of mind or desire to understand, and the argument will just go round and round indefinitely.
 
Honestly not worth your time, because there is no changing of mind or desire to understand, and the argument will just go round and round indefinitely.

It's because people are trying to correct him on a topic he doesn't understand, nor does he have the knowledge required to understand it. He doesn't even understand the difference between a scratchpad and a cache, which would take 10 seconds to look up; instead he is staunchly sticking to something Microsoft incorrectly said previously.

I'm out of this conversation with him. I'm sorry, but I just cannot stand someone who is so arrogant and unwilling to learn or be wrong; it's as if he's being paid to push a specific point.
 
Honestly not worth your time, because there is no changing of mind or desire to understand, and the argument will just go round and round indefinitely.

I'm gonna vouch for Kb here. Kb is always respectful and has an interest in the technology behind consoles, and is not here just for platform advocacy.
 
In other news, mrxmedia seems to be spinning completely out of control, basically rehashing every bit of complete nonsense he posted before as if it were new information. And this is now that people have the hardware in their hands. I think Bill Gates could drive to that guy's house, sit him down, and say, "Son, you've got to stop. None of what you are saying is true, or even physically possible." And he would still keep digging.
 
In other news, mrxmedia seems to be spinning completely out of control, basically rehashing every bit of complete nonsense he posted before as if it were new information. And this is now that people have the hardware in their hands. I think Bill Gates could drive to that guy's house, sit him down, and say, "Son, you've got to stop. None of what you are saying is true, or even physically possible." And he would still keep digging.

Lol that you, of all people, still go there.
 
In other news, mrxmedia seems to be spinning completely out of control, basically rehashing every bit of complete nonsense he posted before as if it were new information. And this is now that people have the hardware in their hands. I think Bill Gates could drive to that guy's house, sit him down, and say, "Son, you've got to stop. None of what you are saying is true, or even physically possible." And he would still keep digging.

The s*** they make up is quite entertaining, though. Have to give them that. Got to make sure the secret-sauce 40 TFLOP GPU continues to exist no matter what.

oh look, mistercteam is in the thread.
 
Probably been said, but can the Xbox One output 7.1ch surround sound?

Yes, there are settings for stereo, surround sound 5.1 and surround sound 7.1.

Anyone with an HDMI receiver should be fine, as we pass the uncompressed 5.1 and 7.1 through HDMI as well as DTS. Even if you have a Dolby only HDMI receiver (which I'm not sure exists), you will still get 5.1 or 7.1 sound since those receivers should accept uncompressed surround.
 
It's because people are trying to correct him on a topic he doesn't understand, nor does he have the knowledge required to understand it. He doesn't even understand the difference between a scratchpad and a cache, which would take 10 seconds to look up; instead he is staunchly sticking to something Microsoft incorrectly said previously.

MS didn't incorrectly label what they designed. You are using a narrow usage of the term that isn't the meaning I am using. The eSRAM is a cache. It's managed by software rather than by hardware lookup, so you want to call it a scratchpad, which is fine, but don't presume to assert I dunno wtf I'm talking about just because I'm using the actual meaning of the term instead of your narrow jargon.

Likewise, it has nothing to do with the discussion. You just inserted it because you mistakenly assumed my diction was wrong, and so you clung to that misconception as a substitute for any actual counter to what I said about MS's tiled rendering patent. You aren't here for discussion; you are here to dismiss anything you find favorable in the X1 for whatever emotionally tangled reason motivates you.

Im out of this conversation with him, I'm sorry but I just cannot stand someone who is so arrogant and unwilling to learn or be wrong, its as if he's being paid to push a specific point.

After the paragraph you just spewed onto the computer screen in front of you, I am astonished you have the gall to claim I'm being arrogant. By all means, take your projections elsewhere.
 
Mrx was pulling stuff out of his butt. It was obvious to anyone. Tiled Resources is a real thing; the question is what it means. People with some knowledge of it are trying to explain what they think. There is a legitimate debate. You, with absolutely zero knowledge of it except that you have a bug up your ass about it, and who can't seem to allow any positivity surrounding the Xbox, have nothing to offer that debate.

If anyone is making claims that TR means the XB1 will run circles around PS4, they're wrong. But the technology itself is not secret sauce, and nothing you can say will make it such.

So you finally admit it, which is what they've been claiming in this thread. There's a thread on B3D solely dedicated to poking fun at Astro's "ESRAM astrophysics speculation". They're grasping at tiny speculative straws and making mountains out of molehills over theoretical and/or minor performance boosts. The TR/ESRAM claims they're making have little or nothing to do with the more rational speculation on shared GCN features, the diff between GCN1.0 and GCN1.1, or API differences.

It's the same kind of straw-grasping conspiracy exaggeration as everyone's favorite blog, and any rational, logical skeptic can see that. Keep up the baseless false accusations about "not allowing positivity" and the hypocritical turning of a blind eye, though; it's getting routine.

I'm looking forward to chipworks uncovering the "ESRAM/TR latency proprietary secret hardware modifications" that "puts PS4 at a significant disadvantage."
 

Die shot, and no surprises the eSRAM is huge.
 
I just watched the GameSpot XB1 hardware teardown. Is it confirmed that the XB1's CPU is more powerful than the PS4's? They were claiming it is, but I thought that was just a rumor.
 
I just watched the GameSpot XB1 hardware teardown. Is it confirmed that the XB1's CPU is more powerful than the PS4's? They were claiming it is, but I thought that was just a rumor.

There's something like an 8-10% clock difference between the two, but that's it.
 
http://www.chipworks.com/en/technical-competitive-analysis/resources/blog/inside-the-xbox-one

Chipworks delivers. You can see the ~1.6 billion-transistor ESRAM crowding out space from the GPU and other components. Too bad they didn't go for a separate EDRAM die or GDDR5. No dGPUs or other silliness, sorry.

PS4's CPU clock rate hasn't been officially confirmed, but it's assumed to be 1.6GHz from vgleaks. It may be 0.15GHz slower, which should have little effect on real-world game performance.
 
So misterx (aka "insider") is claiming the chipworks shot is fake and/or manipulated and they didn't "keep digging" to find the 2nd layer of transistors underneath. In misterc's own words, Xbox One is "just cheap jagaur + 12CU + small DSP + SRAM". He seems a bit bitter.

Anyone else still think ESRAM TR latency, upclocks, upscaling, or offloading is going to bring performance "parity"? Where's all those "proprietary secret GPU/ESRAM hardware modifications MS isn't obligated to reveal" that "put PS4 at a significant disadvantage"?

All I see from chipworks is the official specs confirmed and a big chunk of ESRAM hogging space from the GPU and other components.

Meanwhile, back in the real world...
http://www.anandtech.com/show/7546/chipworks-confirms-xbox-one-soc-has-14-cus

[Die size comparison image]