Could we actually address the difference between PRT and TR please? Instead of bickering with someone who clearly has nothing to add to the discussion.
Ok, I have some explaining to do. However, I am merely a student here, so if I make some mistakes please bear with me.
PRT, or Partially Resident Textures, is a hardware interface to mega-textures, sparse virtual textures, or whatever you want to call them. It allows shaders to access arbitrarily large textures at render time. However, at this time it only supports a single sample. It would be useful for large height maps, the textures that cover the height fields, etc. However, if a developer wanted to filter the texture they were sampling (say with mipmapping, bilinear, trilinear, or anisotropic filtering), they would have to manually request the samples and blend them in their shader code. If a requested sample is outside the currently resident tiles, the software or driver has to fetch the requested tile. If you look at the Tier 1 example code in the Microsoft SDK you can see how this works.
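To make that "manually request the samples and blend" point concrete, here is a rough sketch of the logic in Python (real code would be HLSL; the `texture` and `residency` lookups here are stand-ins I made up for the hardware's texel fetch and tile-residency check): take the four point samples around the coordinate and blend them by the fractional position, substituting a fallback value when a sample's tile is not resident.

```python
import math

def bilinear_prt_sample(texture, residency, u, v, width, height, fallback=0.0):
    """Manually bilinear-filter a partially resident texture.

    `texture[(x, y)]` holds texel values; `residency` is the set of
    resident texel coordinates (a stand-in for "is this tile mapped?").
    Both are hypothetical; real code would be HLSL sample/load calls.
    """
    # Texel-space position of the sample.
    x = u * (width - 1)
    y = v * (height - 1)
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0

    def fetch(tx, ty):
        # If the requested texel's tile is not resident, the software or
        # driver would have to fetch it; here we just return a fallback.
        tx = min(max(tx, 0), width - 1)
        ty = min(max(ty, 0), height - 1)
        return texture[(tx, ty)] if (tx, ty) in residency else fallback

    # Four point samples, blended by the fractional offsets.
    s00, s10 = fetch(x0, y0), fetch(x0 + 1, y0)
    s01, s11 = fetch(x0, y0 + 1), fetch(x0 + 1, y0 + 1)
    top = s00 * (1 - fx) + s10 * fx
    bot = s01 * (1 - fx) + s11 * fx
    return top * (1 - fy) + bot * fy
```

Trilinear or anisotropic filtering only multiplies the number of manual fetches and blends, which is why doing this in shader code is such a burden compared to a single hardware-filtered sample.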
PRT tiles cannot be render targets. In this regard it is useless as an acceleration function.
It is an incomplete solution compared to the DirectX Tiled Resources API.
The DirectX Tiled Resources API goes well beyond what PRT accomplishes. A Tiled Resource can be a render target in the TR API. When a shader renders a fragment into a Tiled Resource, the hardware or driver software calls forth the tiles that need to be rendered to and applies the shader to all the correct tiles.
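A very rough mental model for this (a Python toy, with invented names; the real D3D11.2 mapping call is `UpdateTileMappings`) is a page table from virtual tile coordinates to slots in a small physical tile pool, where touching an unmapped tile maps it in on demand:

```python
class TiledResource:
    """Toy model of a tiled resource: a huge virtual grid of tiles
    backed by a small physical tile pool. Class and method names are
    my own invention for illustration, not the actual D3D API."""

    def __init__(self, pool_slots):
        self.pool_slots = pool_slots
        self.page_table = {}   # (tile_x, tile_y) -> slot in the pool
        self.lru = []          # tiles in least-recently-used order

    def touch(self, tile):
        """Called when a shader reads or renders into `tile`.
        Maps the tile into the pool on demand, evicting the
        least-recently-used tile when the pool is full."""
        if tile in self.page_table:
            self.lru.remove(tile)
            self.lru.append(tile)
            return self.page_table[tile]
        if len(self.page_table) >= self.pool_slots:
            victim = self.lru.pop(0)          # evict the oldest tile
            slot = self.page_table.pop(victim)
        else:
            slot = len(self.page_table)       # take a fresh slot
        self.page_table[tile] = slot
        self.lru.append(tile)
        return slot
```

The point of the model is that the pool can stay small while the virtual resource is arbitrarily large, and the render-target case is just "touch" being triggered by a write instead of a read.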
This matters because in physically based lighting systems the single most computationally expensive work is calculating light sources and light source occlusion. When a particular render frame is broken down into tiles, the active lights can be sorted and calculated in only the tile buckets that they influence (this happens in screen space). A tile that only has sky dome in it, for example, only needs to figure ambient light. A farm across a valley in a fantasy game only needs to figure ambient and sunlight. On the other hand, the character model in a third-person brawler can use ambient, sunlight, SSAO, subsurface scattering on skin materials, and perhaps some cubic reflection on metallic materials.
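The bucketing step can be sketched like this (Python again, just the sorting logic; the 16-pixel tile size and the `(x, y, radius)` light representation are my own assumptions, not anything from the API):

```python
def bin_lights(lights, screen_w, screen_h, tile_size=16):
    """Sort point lights into the screen-space tiles they influence.

    `lights` is a list of (x, y, radius) in screen space. Returns a
    dict mapping (tile_x, tile_y) -> list of light indices, so the
    shading pass for each tile only loops over its own bucket instead
    of every light in the scene.
    """
    tiles_x = (screen_w + tile_size - 1) // tile_size
    tiles_y = (screen_h + tile_size - 1) // tile_size
    buckets = {}
    for i, (x, y, radius) in enumerate(lights):
        # Conservative bound: every tile overlapped by the light's
        # bounding square in screen space.
        tx0 = max(0, int((x - radius) // tile_size))
        tx1 = min(tiles_x - 1, int((x + radius) // tile_size))
        ty0 = max(0, int((y - radius) // tile_size))
        ty1 = min(tiles_y - 1, int((y + radius) // tile_size))
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                buckets.setdefault((tx, ty), []).append(i)
    return buckets
```

A sky-dome tile simply ends up with an empty bucket, so it never pays for the point lights at all, which is exactly the saving described above.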
The render target that all this information is stored in is called a g-buffer. I don't totally understand these; you should ask kb, as he understands g-buffers much better than I do. Anyhow, the Tiled Resources API in DirectX 11.2 (and 11.1+) manages this whole thing for the developer.
That this data structure is also perfectly sized to fit in the GPU cache, and is thread safe as well, is a real enhancement. Everything here can be accomplished in software on the PC and other game platforms too; in fact, some games on the X360 already use techniques like this. But the only hardware that seems designed to move these tiles around is the DMA hardware in the X1. It has some specialized processors that seem perfectly suited to them, and can even keep them compressed in JPEG format until they are needed.
Turn up your volume and press play on the video below now.
#balance conspiracytheory
Also... 32 megabytes is the perfect quantity to address 8 gigabytes divided into 256-byte tiles. A 256-byte tile would fit nicely into every cache in every computation unit on the X1: the CPUs, the GPU, the audio DSP. It is almost as if it was all planned that way.
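For what it's worth, the tile-count arithmetic does check out if you read the figures as binary units (GiB/MiB): 8 GiB split into 256-byte tiles gives 2^25 = 33,554,432 tiles, and at one byte per tile that is exactly 32 MiB.

```python
# Tile-count arithmetic from the post, using binary units (GiB/MiB).
MEMORY = 8 * 2**30          # 8 GiB of addressable memory
TILE = 256                  # bytes per tile
tiles = MEMORY // TILE      # number of 256-byte tiles
print(tiles)                # 33554432 tiles (2**25)
print(tiles == 32 * 2**20)  # True: one byte per tile is exactly 32 MiB
```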