The article is a bunch of made-up crap. The small eDRAM pool gives a memory bandwidth boost similar to what the ESRAM does in the Xbox One, only slightly lower, not some magical 500+ GB/second bandwidth.
Just.. completely invented nonsense really.
Same with the idea that the Wii U is more powerful than the Xbox One. The Xbox One's GPU is at least three times as powerful as the Wii U's.. which is much closer to the 360 in overall power than it is to the Xbox One.
Jesus.. this thread..
Okay, finally some free time and I'll try to keep it short:
First up GPU comparison:
Now you seem to be a person that only looks at FLOPS, which is fine for raw throughput, but it is a fixed calculation derived from a benchmark measurement. For instance, a benchmark that computes the length of a vector and tries to do that as often as possible.
It doesn't, however, tell us much outside of the throughput of that particular benchmark. The GPU in the Wii U is far newer than the ones in the consoles you compare it to. Where at first it was expected to be based on an R700 (4000 series Radeon), newer SDKs gave away that it is actually closer to a 5000 series and probably a 6000 series. Now the R700 is a generation later than the Xenos (or R500) of the Xbox 360, and two generations later than the RSX, which is still based on non-unified shader technology. If it is indeed closer to the 6000 series as expected, then we are actually four and five iterations away from these machines' graphics processors.
Now why does this matter? With new generations come new instructions and more efficient ways of doing things. I won't bore you with the actual instruction sets and additions, but I can give you a simple example based on squaring a number.
Let's say you have two hypothetical processors: one doesn't support multiplication, while the other one does. Here is an example of a simple function to square a number; for the demonstration we only work with positive integer numbers (not floating point):
Proc A:
Code:
typedef unsigned int uint;

// First we need a function that multiplies two numbers, built out of
// repeated addition, since Proc A has no hardware multiply.
uint mul(uint a, uint b)
{
    return b == 0 ? 0 : a + mul(a, b - 1);
}

uint square(uint a)
{
    return mul(a, a);
}
Proc B supports a multiply function:
Code:
// Proc B has a hardware multiply, so squaring is a single operation.
uint square(uint a)
{
    return a * a;
}
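A quick sanity check, if you want to try it yourself: compile either processor's version together with a small main() like this (the main() is just my illustration, not part of the example above):
Code:
#include <stdio.h>

int main(void)
{
    // Whichever square() you compiled in, the results are the same:
    printf("square(2) = %u\n", square(2));   // 4
    printf("square(4) = %u\n", square(4));   // 16
    return 0;
}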
Now let us assume for a moment that multiplication is a bit more expensive than a simple addition, say four times as expensive. So now we can calculate how the two stack up against each other. We give both processors the same benchmark speed of 4 IPS (instructions per second). The addition will cost us 1 instruction and the multiplication will cost us 4. (It is not this simple of course; there is also a conditional in there that would require some instructions in a better example.)
So let's use 2 and 4 as the numbers to square:
A would need the following chain of calls, costing 1 instruction each:
mul(2,2) → mul(2,1) → mul(2,0) = 3 instructions.
mul(4,4) → mul(4,3) → mul(4,2) → mul(4,1) → mul(4,0) = 5 instructions.
So A requires a + 1 instructions to finish a square.
Now for B:
4 instructions.
So now, to please the Sony fanboy, let's look at how much MOAR % one number is over another:
In the case of 2:
A would be 33% faster than B (3 instructions versus 4).
In the case of 4:
A would be 25% slower than B (5 instructions versus 4), or put the other way around, B would be 20% faster than A.
Now the higher the number being squared, the more A will start to lag behind.
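If you want to see that trend, here is a tiny counting sketch (my own, nothing from any console SDK) that tallies the cost of each approach under the 1-versus-4 cost assumption from above:
Code:
#include <stdio.h>

// Cost of squaring 'a' on Proc A: one mul() call per unit of a plus the
// base case, each assumed to cost 1 instruction (so a + 1 in total).
static unsigned cost_proc_a(unsigned a) { return a + 1; }

// Cost on Proc B: a single hardware multiply, assumed to cost 4.
static unsigned cost_proc_b(unsigned a) { (void)a; return 4; }

int main(void)
{
    for (unsigned a = 2; a <= 10; a++)
        printf("square(%2u): A = %2u instructions, B = %u instructions\n",
               a, cost_proc_a(a), cost_proc_b(a));
    return 0;
}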
Now back to the Wii U and its newer GPU. Given this example, and the knowledge that the Wii U actually has a few instructions that provide a more efficient way of taking advantage of Latte, you should be able to see why it is actually quite a bit more capable than either the PS3 or the 360. Not because the raw power is so much higher, but because the hardware is simply newer. The reason fanboys of all camps tend to compare just FLOP numbers or "<x> times more powerful" is because for generations this is how console manufacturers sold their hardware.
The Mega Drive is 16 BITS! or has BLAST PROCESSING!
The Dreamcast was 15 times as powerful as a PSone and 10 times as powerful as an N64 (only when counting certain numbers).
The GameCube was called 256 bits for a while.
....
And the further we casually stray from Sony's 2560 bits claim for the PS2 (the frame buffer bus width), the better.
It was, however, not things like the number of bits in the address space or the amount of MIPS that put these consoles in a new generation. All of these consoles also offered new techniques and instruction sets to tackle problems, which set them apart from the previous generation. With the SNES you could rotate a background by feeding integer values into a 2 by 2 matrix. The N64 allowed for texture filtering and perspective-correct texturing.
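To make that SNES point a bit more concrete: a background rotation really is just a 2 by 2 matrix applied to every pixel coordinate. A minimal sketch in plain C (my own illustration, using floating point for readability; the actual hardware fed fixed-point integers into its matrix registers):
Code:
#include <math.h>

// Rotate a background coordinate (x, y) by 'angle' radians around the origin,
// the same kind of 2x2 matrix transform the SNES applied per pixel.
static void rotate_point(float angle, float x, float y, float *out_x, float *out_y)
{
    float a =  cosf(angle), b = -sinf(angle);   // | a  b |
    float c =  sinf(angle), d =  cosf(angle);   // | c  d |
    *out_x = a * x + b * y;
    *out_y = c * x + d * y;
}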
And likewise the Wii U offers some interesting techniques that the 360 and PS3 can only emulate at best.
eDRAM:
The size of the eDRAM for the Wii U is actually very nicely chosen. Developers very often ran into problems with the 360's, considering its size; especially when deferred rendering started to become common, they often wished it was 16 megs rather than 10. For a 720p framebuffer, 32 megs is more than plenty. It also holds some other neat tricks up its sleeve. One of these is that the Wii U GPU can process integer values, which are very handy and quite compact. You can fit a lot of those in 32 megs, and since the bandwidth is high with very low latency, you can also run some highly iterative code quite fast over many of them. It is a trick the 360 could only emulate via MEMEXPORT, as it still had to go through the GPU to access its eDRAM.
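To give a feel for what "highly iterative integer code over a small, fast buffer" means, here is a toy sketch (entirely mine; the tile size and the brightness-count pass are illustrative assumptions, nothing from Nintendo's SDK):
Code:
#include <stdint.h>
#include <stddef.h>

// Toy example: an integer pass over a small tile of packed pixel data,
// the kind of workload that benefits from a low-latency on-die buffer.
#define TILE_PIXELS (512 * 512)          // ~1 MB of 32-bit values, fits easily in 32 MB

static uint32_t tile[TILE_PIXELS];       // imagine this living in the eDRAM pool

static size_t count_bright_pixels(uint32_t threshold)
{
    size_t count = 0;
    for (size_t i = 0; i < TILE_PIXELS; i++) {
        uint32_t luma = tile[i] & 0xFF;  // integer unpack, no float math needed
        if (luma > threshold)
            count++;
    }
    return count;
}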
Now another thing is that it's situated on the same die rather than on a daughter die, which is actually a lot more efficient. For instance, Microsoft's engineers had to slow down the bus to the 360's eDRAM when they were finally able to shrink it down to one chip; if not, the newer chips would have been faster than the old ones and some tightly timed code would have run into problems.
You see, on its own the Espresso is not a very powerful CPU. Rather than one PowerPC G3 core, it now has three that are higher clocked, plus a few new instructions. However, it has a neat trick up its sleeve: it can access the same eDRAM directly. This allows it to use it as a scratchpad on top of its already larger cache memory (compared to the Wii), which lets the CPU actually hold up quite well.
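As a rough picture of why a scratchpad helps, here is a generic tiling sketch (again entirely my own, nothing Espresso-specific): copy a chunk of a large array into a small fast buffer, do all the repeated work there, then write it back.
Code:
#include <string.h>
#include <stddef.h>

#define SCRATCH_BYTES (64 * 1024)        // pretend this buffer sits in fast, on-die memory

static unsigned char scratch[SCRATCH_BYTES];

// Process a big buffer in scratchpad-sized chunks: copy in, make several
// passes over the small hot working set, copy back out.
static void process_in_tiles(unsigned char *data, size_t len)
{
    for (size_t off = 0; off < len; off += SCRATCH_BYTES) {
        size_t chunk = len - off < SCRATCH_BYTES ? len - off : SCRATCH_BYTES;
        memcpy(scratch, data + off, chunk);
        for (int pass = 0; pass < 4; pass++)            // repeated passes stay in fast memory
            for (size_t i = 0; i < chunk; i++)
                scratch[i] = (unsigned char)(scratch[i] * 3u + (unsigned)pass);
        memcpy(data + off, scratch, chunk);
    }
}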
Now I know I'm still quite vague, but neither Nintendo nor IBM is sharing much with us (the gamers) regarding the capabilities of the GPU and CPU. We do know that the new dev kits are significantly faster than the old ones, and that the GPU has probably been underestimated. But to say that the Wii U is about the same "POWER" as the 360 or PS3? No... maybe if you only count floating point operations or instructions per second? Sure... but there is a lot more to computing than that.
I really hope I kept it reasonably short and as simple as I could. If I left anything out or made a mistake (I'm typing this on my break), do point it out.