DirectX 12 Coming to Xbox One thread, v. 2

Status
Not open for further replies.

menace-uk-

Starfield Gazer
Sep 11, 2013
37,436
15,470
3,930
So the old thread got locked (deservedly so), but this is a topic that should remain. Hopefully, with a lot of the 'out there' stuff already proven false, we can have a more civil and on-point discussion.

[Image: DirectX 12 logo]


What's the big deal?
DirectX 12 introduces the next version of Direct3D, the graphics API at the heart of DirectX. Direct3D is one of the most critical pieces of a game or game engine, and we’ve redesigned it to be faster and more efficient than ever before. Direct3D 12 enables richer scenes, more objects, and full utilization of modern GPU hardware. And it isn’t just for high-end gaming PCs either – Direct3D 12 works across all the Microsoft devices you care about. From phones and tablets, to laptops and desktops, and, of course, Xbox One, Direct3D 12 is the API you’ve been waiting for.

What makes Direct3D 12 better? First and foremost, it provides a lower level of hardware abstraction than ever before, allowing games to significantly improve multithread scaling and CPU utilization. In addition, games will benefit from reduced GPU overhead via features such as descriptor tables and concise pipeline state objects. And that’s not all – Direct3D 12 also introduces a set of new rendering pipeline features that will dramatically improve the efficiency of algorithms such as order-independent transparency, collision detection, and geometry culling.

Of course, an API is only as good as the tools that help you use it. DirectX 12 will contain great tools for Direct3D, available immediately when Direct3D 12 is released.

We think you’ll like this part: DirectX 12 will run on many of the cards gamers already have. More on that in our FAQ.



Is this marketing spin?
We (the product team) read the comments on Twitter and on game development and gamer forums, and many of you have asked if this is real or if our marketing department suddenly received a budget infusion. Everything you are reading is coming directly from the team who has brought you almost 20 years of DirectX.

It’s our job to create great APIs and we have worked closely with our hardware and software partners to prove the significant performance wins of Direct3D 12. And these aren’t just micro-benchmarks that we hacked up ourselves – these numbers are for commercially released game engines or benchmarks, running on our alpha implementation. The screenshots below are from real Direct3D 12 app code running on a real Direct3D 12 runtime running on a real Direct3D 12 driver.



3DMark – Multi-thread scaling + 50% better CPU utilization
If you’re a gamer, you know what 3DMark is – a great way to do game performance benchmarking on all your hardware and devices. This makes it an excellent choice for verifying the performance improvements that Direct3D 12 will bring to games. 3DMark on Direct3D 11 uses multi-threading extensively, however due to a combination of runtime and driver overhead, there is still significant idle time on each core. After porting the benchmark to use Direct3D 12, we see two major improvements – a 50% improvement in CPU utilization, and better distribution of work among threads.

[Image: CPU utilization comparison, Direct3D 11 vs. Direct3D 12]


Direct3D 11
[Image: 3DMark per-core CPU timeline under Direct3D 11]


Direct3D 12
[Image: 3DMark per-core CPU timeline under Direct3D 12]


Where does this performance come from?

Direct3D 12 represents a significant departure from the Direct3D 11 programming model, allowing apps to go closer to the metal than ever before. We accomplished this by overhauling numerous areas of the API. We will provide an overview of three key areas: pipeline state representation, work submission, and resource access.

Pipeline state objects
Direct3D 11 allows pipeline state manipulation through a large set of orthogonal objects. For example, input assembler state, pixel shader state, rasterizer state, and output merger state are all independently modifiable. This provides a convenient, relatively high-level representation of the graphics pipeline; however, it doesn't map very well to modern hardware. This is primarily because there are often interdependencies between the various states. For example, many GPUs combine pixel shader and output merger state into a single hardware representation, but because the Direct3D 11 API allows these to be set separately, the driver cannot resolve things until it knows the state is finalized, which isn't until draw time. This delays hardware state setup, which means extra overhead, and fewer maximum draw calls per frame.

Direct3D 12 addresses this issue by unifying much of the pipeline state into immutable pipeline state objects (PSOs), which are finalized on creation. This allows hardware and drivers to immediately convert the PSO into whatever hardware native instructions and state are required to execute GPU work. Which PSO is in use can still be changed dynamically, but to do so the hardware only needs to copy the minimal amount of pre-computed state directly to the hardware registers, rather than computing the hardware state on the fly. This means significantly reduced draw call overhead, and many more draw calls per frame.
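
As a rough illustration, here is a minimal sketch of what that looks like in practice, using the Direct3D 12 API in the form it later shipped (not the alpha build described above). The `device`, `rootSignature`, compiled shader blobs, and `inputLayout` are assumed to be created elsewhere in the renderer; the formats and state values are placeholders.

```cpp
#include <windows.h>
#include <climits>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch only: bake shaders, rasterizer, blend, and depth state into a
// single immutable PSO instead of setting them piecemeal as in Direct3D 11.
ComPtr<ID3D12PipelineState> CreateOpaquePso(
    ID3D12Device* device, ID3D12RootSignature* rootSignature,
    ID3DBlob* vsBlob, ID3DBlob* psBlob,
    const D3D12_INPUT_LAYOUT_DESC& inputLayout)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSignature;
    desc.VS = { vsBlob->GetBufferPointer(), vsBlob->GetBufferSize() };
    desc.PS = { psBlob->GetBufferPointer(), psBlob->GetBufferSize() };
    desc.InputLayout = inputLayout;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask =
        D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.DepthStencilState.DepthEnable = TRUE;
    desc.DepthStencilState.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ALL;
    desc.DepthStencilState.DepthFunc = D3D12_COMPARISON_FUNC_LESS;
    desc.SampleMask = UINT_MAX;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.DSVFormat = DXGI_FORMAT_D32_FLOAT;
    desc.SampleDesc.Count = 1;

    // All of this state is validated and translated into hardware-native
    // commands once, at creation time, not at every draw call.
    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}
```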

Command lists and bundles
In Direct3D 11, all work submission is done via the immediate context, which represents a single stream of commands that go to the GPU. To achieve multithreaded scaling, games also have deferred contexts available to them, but deferred contexts do not map perfectly to hardware either, and so relatively little work can be done in them.

Direct3D 12 introduces a new model for work submission based on command lists that contain the entirety of information needed to execute a particular workload on the GPU. Each new command list contains information such as which PSO to use, what texture and buffer resources are needed, and the arguments to all draw calls. Because each command list is self-contained and inherits no state, the driver can pre-compute all necessary GPU commands up-front and in a free-threaded manner. The only serial process necessary is the final submission of command lists to the GPU via the command queue, which is a highly efficient process.
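
To make the model concrete, here is a minimal sketch (again using the API names as Direct3D 12 later shipped) of several worker threads each recording their own self-contained command list, with queue submission as the only serial step. The per-thread `allocator`, the `pso`, and the actual binding/draw recording are assumed to exist elsewhere.

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch: each worker thread records a fully self-contained command list.
// `allocator` is a D3D12_COMMAND_LIST_TYPE_DIRECT allocator owned by that
// thread; the resource binding and draw calls are elided for brevity.
ID3D12GraphicsCommandList* RecordSceneChunk(ID3D12Device* device,
                                            ID3D12CommandAllocator* allocator,
                                            ID3D12PipelineState* pso)
{
    ID3D12GraphicsCommandList* list = nullptr;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, pso, IID_PPV_ARGS(&list));

    // ... set root signature, descriptor tables, vertex/index buffers,
    //     and record the draw calls for this thread's slice of the scene ...

    list->Close();   // the list is now fully baked and ready to submit
    return list;
}

// Back on the main thread, after all workers have finished recording:
void SubmitFrame(ID3D12CommandQueue* queue,
                 ID3D12CommandList* const* lists, UINT count)
{
    // The only serialized step: hand the pre-built command lists to the GPU.
    queue->ExecuteCommandLists(count, lists);
}
```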

In addition to command lists, Direct3D 12 also introduces a second level of work pre-computation, bundles. Unlike command lists which are completely self-contained and typically constructed, submitted once, and discarded, bundles provide a form of state inheritance which permits reuse. For example, if a game wants to draw two character models with different textures, one approach is to record a command list with two sets of identical draw calls. But another approach is to “record” one bundle that draws a single character model, then “play back” the bundle twice on the command list using different resources. In the latter case, the driver only has to compute the appropriate instructions once, and creating the command list essentially amounts to two low-cost function calls.
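
A sketch of that two-character example might look like the following. API names are from the released Direct3D 12 headers; the mesh setup, the root signature, the descriptor handles, and the vertex count are illustrative assumptions, and the bundle state-inheritance rules are glossed over.

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch: record the character draw once as a bundle, then replay it twice
// from the frame's command list with a different texture descriptor table.
// `bundleAllocator` must be of type D3D12_COMMAND_LIST_TYPE_BUNDLE.
void DrawTwoCharacters(ID3D12Device* device,
                       ID3D12CommandAllocator* bundleAllocator,
                       ID3D12PipelineState* pso,
                       ID3D12GraphicsCommandList* frameList,
                       D3D12_GPU_DESCRIPTOR_HANDLE textureTableA,
                       D3D12_GPU_DESCRIPTOR_HANDLE textureTableB)
{
    ID3D12GraphicsCommandList* bundle = nullptr;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_BUNDLE,
                              bundleAllocator, pso, IID_PPV_ARGS(&bundle));

    // Record the shared work once.
    bundle->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    // ... IASetVertexBuffers / IASetIndexBuffer for the character mesh ...
    bundle->DrawInstanced(1234, 1, 0, 0);   // placeholder vertex count
    bundle->Close();

    // "Play back" the bundle twice; only the bound texture table changes,
    // and the driver only had to translate the bundle's commands once.
    frameList->SetGraphicsRootDescriptorTable(0, textureTableA);
    frameList->ExecuteBundle(bundle);

    frameList->SetGraphicsRootDescriptorTable(0, textureTableB);
    frameList->ExecuteBundle(bundle);
}
```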

Descriptor heaps and tables
Resource binding in Direct3D 11 is highly abstracted and convenient, but leaves many modern hardware capabilities underutilized. In Direct3D 11, games create “view” objects of resources, then bind those views to several “slots” at various shader stages in the pipeline. Shaders in turn read data from those explicit bind slots which are fixed at draw time. This model means that whenever a game wants to draw using different resources, it must re-bind different views to different slots, and call draw again. This is yet another case of overhead that can be eliminated by fully utilizing modern hardware capabilities.

Direct3D 12 changes the binding model to match modern hardware and significantly improve performance. Instead of requiring standalone resource views and explicit mapping to slots, Direct3D 12 provides a descriptor heap into which games create their various resource views. This provides a mechanism for the GPU to directly write the hardware-native resource description (descriptor) to memory up-front. To declare which resources are to be used by the pipeline for a particular draw call, games specify one or more descriptor tables which represent sub-ranges of the full descriptor heap. As the descriptor heap has already been populated with the appropriate hardware-specific descriptor data, changing descriptor tables is an extremely low-cost operation.
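
As a rough sketch of the released API (the CreateShaderResourceView calls that populate the heap at load time, and the root signature declaring root parameter 0 as a descriptor table, are assumed to exist elsewhere), switching materials becomes little more than offsetting a handle into an already-populated heap:

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch: create a shader-visible descriptor heap, assume it was filled
// with SRVs at load time, then switch materials per draw by pointing
// root parameter 0 at a different sub-range of that heap.
ID3D12DescriptorHeap* CreateSrvHeap(ID3D12Device* device, UINT capacity)
{
    D3D12_DESCRIPTOR_HEAP_DESC desc = {};
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    desc.NumDescriptors = capacity;
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    ID3D12DescriptorHeap* heap = nullptr;
    device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
    return heap;
}

void DrawWithMaterial(ID3D12Device* device,
                      ID3D12GraphicsCommandList* frameList,
                      ID3D12DescriptorHeap* heap,
                      UINT materialIndex)
{
    // Descriptors were written up-front; here we only compute an offset.
    UINT increment = device->GetDescriptorHandleIncrementSize(
        D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

    D3D12_GPU_DESCRIPTOR_HANDLE table =
        heap->GetGPUDescriptorHandleForHeapStart();
    table.ptr += static_cast<UINT64>(materialIndex) * increment;

    // Changing the descriptor table is a cheap pointer update: no per-draw
    // view objects or slot rebinding as in Direct3D 11.
    ID3D12DescriptorHeap* heaps[] = { heap };
    frameList->SetDescriptorHeaps(1, heaps);
    frameList->SetGraphicsRootDescriptorTable(0, table);
    // ... DrawIndexedInstanced(...) for the object using this material ...
}
```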

In addition to the improved performance offered by descriptor heaps and tables, Direct3D 12 also allows resources to be dynamically indexed in shaders, providing unprecedented flexibility and unlocking new rendering techniques. As an example, modern deferred rendering engines typically encode a material or object identifier of some kind to the intermediate g-buffer. In Direct3D 11, these engines must be careful to avoid using too many materials, as including too many in one g-buffer can significantly slow down the final render pass. With dynamically indexable resources, a scene with a thousand materials can be finalized just as quickly as one with only ten.
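
On the API side, a hedged sketch of what enables that flexibility (using the released root-signature API; `materials` and `materialId` are illustrative names) is a descriptor range of unbounded size, which lets a shader model 5.1 shader declare an open-ended array such as `Texture2D materials[] : register(t0)` and sample `materials[materialId]` directly:

```cpp
#include <windows.h>
#include <climits>
#include <d3d12.h>

// Sketch: a root signature whose single descriptor table exposes an
// unbounded array of SRVs, so the pixel shader can index materials
// dynamically. Error handling is omitted for brevity.
ID3D12RootSignature* CreateBindlessRootSignature(ID3D12Device* device)
{
    D3D12_DESCRIPTOR_RANGE range = {};
    range.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    range.NumDescriptors = UINT_MAX;   // unbounded: sized by the heap contents
    range.BaseShaderRegister = 0;      // t0
    range.RegisterSpace = 0;
    range.OffsetInDescriptorsFromTableStart = 0;

    D3D12_ROOT_PARAMETER param = {};
    param.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    param.DescriptorTable.NumDescriptorRanges = 1;
    param.DescriptorTable.pDescriptorRanges = &range;
    param.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 1;
    desc.pParameters = &param;
    desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

    ID3DBlob* blob = nullptr;
    ID3DBlob* error = nullptr;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1,
                                &blob, &error);

    ID3D12RootSignature* rootSig = nullptr;
    device->CreateRootSignature(0, blob->GetBufferPointer(),
                                blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSig));
    blob->Release();
    return rootSig;
}
```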

Forza 5 DirectX 12 tech demo shows off PC race footage
 
So there is a second Xbox One coming out this year?

Or are you talking about 4-5 years from now?
 
Why would there be? MS has said the Xbox One will fully use DX12. I'm trying to find this tweet to add to the OP.

I got your back brah.



I'll throw this one in for good measure:



And why not this one:



Which includes the following snippets for those who care:

Every Xbox One will be capable of running DirectX 12 games as well, he said, alongside many Windows Phones.

and

Specifically, AMD "Graphics Core Next" GPUs, broadly HD 7000 and up, and NVIDIA Fermi, Kepler, and Maxwell GPUs (which is to say, all of the company's DirectX 11 parts) will all support DirectX 12.

and this

"People who have our current CPUs, they get the benefit of this API on day one," AMD's Raja Khodury said on stage. "And it's not a small benefit. It's... like getting four generations of hardware ahead with this API."

Hopefully that helps people keep on track. Be nice, mmmkay?
 
Last edited:
Nice. So DX12 titles are coming out in 2016 at the earliest, yeah?

If there are enough great exclusives out, the price is right, and the performance is up to par with the PS4's, then I'm going to jump in.
 
Keep in mind that while Sony did a great job on the hardware (which they did), they can't upgrade it; that hardware is done and out. MS, on the other hand, is a software company that can do so many tricks with the Xbox One for years to come. Look what they did for the Xbox 360.
 
Nice. So DX12 titles are coming out in 2016 at the earliest, yeah?

If there are enough great exclusives out, the price is right, and the performance is up to par with the PS4's, then I'm going to jump in.

No, potentially much earlier. Devs can gain early access and start patching existing games or developing with the new API much sooner than 2015.
 
Why would there be? MS has said the Xbox One will fully use DX12. I'm trying to find this tweet to add to the OP.
It was a direct Phil S. tweet from last week:

@XboxP3: @Nicodemus9 DX12 will have impact on XBOX One games written for DX12. Some DX12 features are already in XBOX One but full DX12 coming.
 
  • Like
Reactions: menace-uk-
So is the X1 the only console with a next-gen, native DX12 graphics card? Only native DX12 cards will run full DX12, and thanks to Phil Spencer's tweet we know the X1 runs FULL DX12.
 
So is the X1 the only console with a next-gen, native DX12 graphics card? Only native DX12 cards will run full DX12, and thanks to Phil Spencer's tweet we know the X1 runs FULL DX12.

This is the most interesting thing about all of this to me. We can speculate all day about how much benefit X1 games will receive from DX12, but it was clearly stated that to fully utilize it, a new GPU would be required.
So either Phil used incorrect wording, or the X1 has a beefier GPU than was at first perceived. We know it's customized, but we still don't have all the specifics about it.
 
This is the most interesting thing about all of this to me. We can speculate all day about how much benefit X1 games will receive from DX12, but it was clearly stated that to fully utilize it, a new GPU would be required.
So either Phil used incorrect wording, or the X1 has a beefier GPU than was at first perceived. We know it's customized, but we still don't have all the specifics about it.

Yeah, agreed. I thought of this as well. They keep saying the X1 will FULLY support DX12, and MS DID spend $3 billion in R&D... makes you wonder what exactly is going on here. No misterxmedia s***, but are we literally being fed a line of PR, or is the GPU capable of more than we know? Not comparing it to the PS4, that is for fanboys. What I am saying is, what if the GPU does have the ability to use certain code that we aren't aware of?
 
Yeah, agreed. I thought of this as well. They keep saying the X1 will FULLY support DX12, and MS DID spend $3 billion in R&D... makes you wonder what exactly is going on here. No misterxmedia s***, but are we literally being fed a line of PR, or is the GPU capable of more than we know? Not comparing it to the PS4, that is for fanboys. What I am saying is, what if the GPU does have the ability to use certain code that we aren't aware of?

They didn't spend $3 billion on R&D... $3 billion is the value of the entire contract, including R&D of the entire SoC, manufacturing, etc.
 
Yeah, agreed. I thought of this as well. They keep saying the X1 will FULLY support DX12, and MS DID spend $3 billion in R&D... makes you wonder what exactly is going on here. No misterxmedia s***, but are we literally being fed a line of PR, or is the GPU capable of more than we know? Not comparing it to the PS4, that is for fanboys. What I am saying is, what if the GPU does have the ability to use certain code that we aren't aware of?

The Xb1 is highly customized, while the PS4, though not a stock GCN GPU, has far fewer customized parts. I know most people would call me crazy, but I 100% believe that by 2015, with the hardware customizations^1 of the Xb1 combined with Tiled Resources^2, the Xb1 will have the majority of the better multi-plats and will be crowned the graphical king.


^1 (extra graphics command processor, 17 MB of cache, audio processors, move engines, plus the other 35 undisclosed microcontrollers)

^2 (not just PRT, and can't be fully realized without the tile-based deferred rendering hardware that isn't in any card before the R9 280 and isn't in any current Nvidia GPU)
 
They didn't spend $3 billion on R&D... $3 billion is the value of the entire contract, including R&D of the entire SoC, manufacturing, etc.

Yes, we are aware. You have stated this numerous times without specifics on the breakdown of what was spent on what. While I don't believe you're doing it intentionally, you are really downplaying that 3 billion dollars was spent on this contract, much of which probably DID go to R&D. That is a hell of a lot of money, even if it is over the course of the contract. Not to mention it probably does not include the amount of money that MS spent "internally" developing the X1.
 
Yes, we are aware. You have stated this numerous times without specifics on the breakdown of what was spent on what. While I don't believe you're doing it intentionally, you are really downplaying that 3 billion dollars was spent on this contract, much of which probably DID go to R&D. That is a hell of a lot of money, even if it is over the course of the contract. Not to mention it probably does not include the amount of money that MS spent "internally" developing the X1.

Exactly... they spent $3B. R&D is costly; manufacturing and design are not so costly. So it is a simple extrapolation that most of the budget, one could easily assume 50% or more, went to R&D.

I know my company is small compared to MS, but we spend significantly more on R&D than we do on production.
 
Yes, we are aware. You have stated this numerous times without specifics on the breakdown of what was spent on what. While I don't believe you're doing it intentionally, you are really downplaying that 3 billion dollars was spent on this contract, much of which probably DID go to R&D. That is a hell of a lot of money, even if it is over the course of the contract. Not to mention it probably does not include the amount of money that MS spent "internally" developing the X1.

No, much of it did not go to R&D. As I pointed out in the other thread, the SoC is probably costing Microsoft $75-$100 per unit. That alone would be $600-$700+ million on the high end thus far, up to $1 billion+ by the end of the year, depending on how much Microsoft has shipped, on a multi-year contract worth $3+ billion. We are talking specifically about the SoC. I would put money on the SoC R&D not being more than $500 million.

It's not like the hardware is fully custom and created from scratch. We know the SoC is semi-custom.
 
Last edited:
No, much of it did not go to R&D. As I pointed out in the other thread, the SoC is probably costing Microsoft $75-$100 per unit. That alone would be $600-$700+ million on the high end thus far, up to $1 billion+ by the end of the year, depending on how much Microsoft has shipped, on a multi-year contract worth $3+ billion. We are talking specifically about the SoC. I would put money on the SoC R&D not being more than $500 million.

Sorry, but now you're just throwing random numbers around without sources to back them up. Again, I don't think you're being deliberately obtuse about it, but you're oversimplifying everything and throwing numbers around based solely on your speculation.

I can just as easily say that for a multi-year contract of this magnitude, it's actually unlikely the SoC costs as much as you say it does, and that any cost to MS for the SoC over the course of the contract INCLUDES R&D, and as such isn't easily treated as separate. You want to quantify them as separate costs, but if it costs MS $75-100 per SoC, that price most likely has R&D baked into it.

I went back to check something from their "Building Xbox One" article/tour and here you go:

"In different levels, we were working on five custom-designed components. Silicon components. Three of them going to the console and two of them to the sensor," Spillinger explains. That's the SoC that drives the console, the CMOS processor in the new Kinect, I/O integrators in both Kinect and the console and a digital signal processor on the Blu-ray drive. For the four gentlemen who show us around the Mountain View campus and scads of others we don't meet, getting to the point where so much of that silicon was designed and verified in-house is the fruition of years of work.

It's a major shift away from the company's past reliance on external partners, with only AMD serving as collaborator this time around. And like any game console launch, it's another huge investment for the next... five, eight, 10 years? That's an unknown, of course, but it seems likely based on history that we'll have the Xbox One for the foreseeable future. Whatever the future dictates, it looks like we'll see internally developed chips in many of Microsoft's products going forward.

Again, I think you're oversimplifying the details.
 
Last edited:
You're right, my numbers are estimates, since we don't have official numbers at a breakdown level.

But the $3 billion was not R&D. It's AMD's revenue; that number came from somebody at AMD:
http://www.fudzilla.com/home/item/31504-amd’s-xbox-deal-worth-more-than-$3-billion
http://www.gamespot.com/articles/xbox-one-deal-with-amd-costs-over-3-billion/1100-6408926/

We really have ZERO idea how much was spent on R&D.

Actually, you should look at what I added to my post. I'm not saying all 3+ billion of the contract was spent on R&D; I'm just saying that you're simplifying it too much, and in reality it looks like much of it was handled by MS this time around, with AMD serving as a collaborator on the design.
 
Last edited by a moderator:
Version 2.0, now with more downplay :D


They need to just dive in already, the One needs some good PR. Start dropping it.
 
  • Like
Reactions: Kvally
My points still stand from the first thread.

With the power of the cloud and DirectX 12, the sky's the limit for what the Xbox One can and will achieve. Also, if you don't understand the tech behind it, then don't worry about it. This is going to be huge for the One.
 
Meh@ the cloud. See Titanfall.

DX12, now we are talking. I am looking forward to the benefits and I hope this isn't another PR stunt like the one they pulled with DX10. I pray that DX12 brings these benefits because that would mean performance parity with OpenGL/OpenCL.
 
Meh@ the cloud. See Titanfall.

Look at what they did in Forza Motorsport 5 with the cloud. The Drivatars added so much to the game. It's crazy to think about how much potential the cloud has. Combine that with DX12 and it's over.
 
Azure is overhyped to a degree, but overall the AI functions, Dedicated Servers and a handful of other showings have been great thus far.
 
I'll reserve judgment until I see some actual results out. Exciting news though for Xbox One owners.
 