DirectX 12 Coming to Xbox One thread, v. 2

Status
Not open for further replies.
http://www.neowin.net/news/directx-12-a-game-changer-for-xbox-one

DirectX 12: A game changer for Xbox One

This week at Build, Microsoft unveiled its new graphics stack, DirectX 12, which was demonstrated to more than double performance on existing hardware and to work on all Microsoft platforms.

Earlier this year, AMD's new Mantle platform demonstrated what was possible if the graphics stack could effectively utilize all the cores on modern CPUs. The result was a massive boost in performance in Mantle-enabled games with very little developer effort, as shown by Oxide Games and Battlefield 4. The challenge for AMD is that Mantle is currently PC-only and restricted to recent AMD video cards. If only there were a graphics stack that supported multi-core processing and worked on all video cards...

Xbox One gets a major upgrade

Meanwhile, Microsoft's Xbox One has been well received but has been criticized for being measurably behind Sony's PlayStation 4 in terms of game performance. Microsoft has responded with DirectX 12. With relatively little effort by developers, upcoming Xbox One games, PC games and Windows Phone games will see a doubling in graphics performance.

Suddenly, that Xbox One game that struggled at 720p will be able to reach fantastic performance at 1080p. For developers, this is a game changer.

The Secret Sauce

Microsoft was able to achieve the performance breakthroughs through two major changes to DirectX 12:
1. Bundles
2. Parallel rendering

Most of the performance gain is a result of DirectX 12 making full use of multiple CPU cores. For example, on DirectX 11 a typical game would perform like this:

http://draginol.stardock.net/images2014/AdventuresinDirectX12_10589/image_6.jpg

As you can see, thread 0 (the starting thread for the game) is doing most of the work. In fact, DirectX 11 is barely being utilized by the other threads.

But on DirectX 12, the situation changes dramatically:
http://draginol.stardock.net/images2014/AdventuresinDirectX12_10589/image_7.jpg


Not only is DirectX 12 more efficient in its own right, but the interaction with the GPU is evenly spread across the CPU cores.
http://draginol.stardock.net/images2014/AdventuresinDirectX12_10589/image_2.jpg

The results are spectacular, not just in theory but in practice (full disclosure: I am involved with the Star Swarm demo, which makes use of this kind of technology). While each generation of video card struggles to gain substantial performance over the previous generation, here the same hardware will suddenly see a doubling of performance.

Xbox One is the biggest beneficiary; it effectively gives every Xbox One owner a new GPU that is twice as fast as the old one.



No free lunch

There is a downside to all this power. DirectX 12 games will be the first games to fully utilize the powerful graphics cards gamers have been buying for the past few years. DirectX 12 won't require a new video card for GPUs that already support DirectX 11. As a result, those GPUs will be pushed twice as hard as before, which means more heat on cards that might have been only barely cool enough when they were being commanded by a single CPU core. We expect many marginal video card setups to experience overheating issues as DirectX 12 suddenly pushes these cards beyond what the IHV had anticipated.

In addition, while DirectX 11 was a relatively plug-and-play experience for developers, DirectX 12 puts a much greater emphasis on developer control. Developers can and are expected to manage conflicts and low-level pipelining to make the most of the new platform. This is a good thing for experienced graphics developers but will put less experienced developers at a disadvantage.

DirectX Unleashed

Not content to merely double performance, Microsoft also announced that they were going to make DirectX 12 available on Xbox One, Windows Phone and Windows PC. This means a single graphics API for multiple platforms. In addition, Microsoft indicated that they were making DirectX 12 their go-to solution for all graphics-intensive tasks -- not just games.

Microsoft hasn't provided a release date for DirectX 12, but we expect to hear more at E3 this spring.

Well that settles it... 1080p in da house!
 

BOOM
My favourite part: "Xbox One is the biggest beneficiary; it effectively gives every Xbox One owner a new GPU that is twice as fast as the old one."

Thanks for posting Starlight, you've made my day. :)
 
If true, that must mean the current API is incredibly inefficient and wasteful.

'It effectively gives every Xbox One owner a new GPU that is twice as fast as the old one' is a bit of a sensationalist statement; in reality it translates to doubling the utilization of the GPU.
 

The dual graphics command processors are ideal for parallel rendering; in fact, I'm sure that is what they are there for.
Most newer GCN cards have a setup of either 3 command processors (2 compute, 1 graphics) or 1 command processor with multiple sources. DX12 will be of more benefit to the cards with 3 command processors, but none will benefit more than Xbox One (4 command processors), which DX12 is based on.

I'm still trying to figure out what the overall benefit of DX12 will be for the Xbox One. So far we know that the CPU will get a 50% performance boost (using 6 cores @ 1.75 GHz = 84 GFLOPS * 1.5 = 126 GFLOPS), we will get up to a 20 GFLOPS performance increase (through better memory management, multi-core scalability, swizzled resources and much deeper access controls), and we now get double the rendering output through parallel rendering and bundles. There is also the fact that we still don't know everything that is in the Xbox, since MS has yet to disclose 35 of the 50 microcontrollers in the APU.

With all the custom stuff MS put into the Xbox and the efficiency gains from DX12, this 1.32 TFLOPS DX12 system should perform like a 3 TFLOPS DX11 system.
 

The CPU and GPU will not see a maximum theoretical floating point increase through software. A Jaguar core is limited by its clock speed and the micro-operations/instructions it is capable of per clock cycle. The same goes for the GCN GPU: clock speed dictates the maximum theoretical FLOPS.

Six cores of the CPU will only ever be capable of 84 GFLOPS maximum at 100% utilization.
The GPU will only ever be capable of 1.31 TFLOPS maximum at 100% utilization.

The hardware is bound by its physical design limitations, but the job of the software is to get as close to, and sustain, that set-in-stone theoretical limit. Software alone cannot alter the physical characteristics of the hardware, but it can utilize what is there to get the most from it.

Any theoretical FLOP increases would only be possible through a clock speed increase or hardware modification.
 
3 TFLOPS is wishful thinking. The hardware will only perform more efficiently within its specifications. DX12 isn't going to magically upgrade the hardware in the X1. This is starting to sound like the Xbox 360 all over again, when DX10 came out for Windows.
 
Read what I wrote again; I NEVER ONCE said it would increase theoretical FLOP performance. I said: "with all the custom stuff MS put into the Xbox and the efficiency gains from DX12, this 1.32 TFLOPS DX12 system should perform like a 3 TFLOPS DX11 system", meaning a more efficient system with x amount of power will be able to perform like a less efficient system with y amount of power.

The system will always be 1.31 TFLOPS for the GPU and 112 GFLOPS for the CPU, BUT with the efficiency gains from DX12 it will ACT like a 3 TFLOPS DX11 system.

Edit: When I think about it, this parallel processing reminds me of double pumping from DDR. At the beginning of a cycle one command processor will be doing something, then at the end of the cycle the second command processor will be doing something, essentially doubling output per cycle, and we know this works because that is how DDR works.
 

You did say the CPU would get a FLOPS increase...

So far we know that the CPU will get a 50% performance boost (using 6 cores @ 1.75 GHz = 84 GFLOPS * 1.5 = 126 GFLOPS), we will get up to a 20 GFLOPS performance increase (through better memory management, multi-core scalability, swizzled resources and much deeper access controls)

Again, a system cannot 'ACT' like a system with higher capabilities. The output will never be 126 GFLOPS for the CPU; likewise, the GPU will never output over 1.31 TFLOPS.

But if you want to word it as I *think* you mean it, then you would have to say that the CPU can get closer to its theoretical maximum than, say, a more capable CPU of the same architecture, through better utilization within its limits.

Likewise, a 1.31 TFLOPS GPU could only perform like a 3 TFLOPS GPU of the same architecture if the 3 TFLOPS GPU was only half utilized and the other fully utilized.
 
Let's all jerk off to semantics.

 
I was talking about performance (the quote even said "perform" in it), not theoretical maximums.

And about the GPU, you are right and wrong at the same time: a slower GPU from the same arch can outperform a faster GPU from the same arch if it has enough modifications. Like my car, for instance: I have a stock 2003 Eclipse GS, but my friend with a 2002 RS modified his and it can beat my car in acceleration and top speed.
 
Well, all I know is that my old Genesis with a 7 MHz CPU and 64K of RAM, and my SNES with a 3 MHz CPU and 128K of RAM, ran rings around my 386 PC running at 33 MHz with 1 MB of RAM. Sure, anything complex and turn-based was better on PC. But anything resembling typical arcadey action was much better on consoles.

It probably wasn't until the mid-90s that a PC could do sprite-based action with parallax scrolling as smoothly as a 1990 16-bit console. The SNES also had to drive its higher-quality sound chip, which was amazing, since audio back then was a big-time system hog. All done with an archaic 3 MHz CPU.
 
All this pseudo-technical talk makes me nostalgic for the old Astro/Ketto/Puppeteer hoedown-throwdowns.:oops:
Yeah. And it's too bad TXB is gone since Astro and Kong Rudi (or Mercy Lack.... I always get those two mixed up) had that hilarious "launch game bet". I forget what the parameters were, but it would be funny to know what the results were.
 

Off topic, but an interesting aside: this actually drove Carmack to develop smooth scrolling for the PC in the early nineties, from which Commander Keen was born. id also ported Super Mario Bros. 3 to the PC and presented it to Nintendo, who were impressed but obviously not interested in licensing the port.
 
Yeah. I'm no techie, but the difference between PCs and consoles back then was incredibly huge, yet consoles could do certain things so much better. On the other hand, PCs could match or surpass them with truly brute force thanks to their better specs.

Just goes to show that specs are just one factor.

You still need the most important thing... the people (support) factor. The Sega Master System was a technically better system than the NES, yet with its huge support the NES had much better games (even if you exclude the shovelware). Aside from Nintendo's recent Wii and Wii U, their GameCube and N64 were just as good as or better than the Sony, Sega or MS systems, yet the games released for them were overall meh. The PS1 and PS2 had the overall best games, and those systems were weaker than the N64, Xbox and GC.
 
If true that must mean the current API is incredibly inefficient and wasteful.

'it effectively gives every Xbox One owner a new GPU that is twice as fast as the old one' is a bit of a sensationalist statement but in reality it translates to doubling the utilization of the GPU.

Or it simply means that the hardware was specially built around the unfinished software it was intended for. I guess that is saying the same thing, but from a different perspective.

It's like the kooky architecture in the X1 is starting to make more sense now. Every component seems to be designed around low latency, compression/decompression, and offloading. It is a DX12/cloud box.
 
That's a really good explanation, actually... Good job!
 

^ That's it, but it will take at least wave-3 games to make good use of it.
 
 
Because 768 means 1536 with the upcoming AMD XTX CUs.
Do you know why the VGLeaks docs show 32 DP ops per CU for PS4? It is Sea Islands' 1:2 DP rate.
Do you know why Yukon shows 64 ALUs, or 64 DP ops? It is VI in tiles (2x wider).

The article below leaked that the Carrizo SoC/APU will use VI in tiles (XTX in logic):
http://www.brightsideofnews.com/new...-details-leaked---soc-for-the-mainstream.aspx


And Carrizo will offer 768 SPs (but actually in tiles); check the Ascii.jp analyst's AMD next-gen GPU/APU roadmap.

As we know, DX12 cannot improve old DX11.1 hardware by 100% or more;
it can manage 30-50% at most, look at the slide.

The Stardock CEO's comment about >100% is because he knows what is inside the X1,
as the X1 has a part that is full DX12, yet there is also a DX11.1 part, like the AMD slide showed.
 