A key concept in Direct3D 11 is the resource. A resource represents an object usable by the API, and is generally either a texture or a buffer.
A resource is an object that allocates memory and other implementation-dependent prerequisites for use in the API. The data in a resource can be updated by both the CPU and the GPU, depending on the flags the resource was created with. Resource updates are always issued on the device context, which guarantees that all commands recorded before the update see the old data, and all following commands see the newly specified data instead. This is achieved through a process known as buffer renaming: the runtime allocates a new block of memory for every resource update, and releases old copies only when all outstanding references to them are guaranteed to have finished.
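The renaming behavior described above is most visible when updating a dynamic buffer. The sketch below, assuming an existing `device` (`ID3D11Device*`) and `context` (`ID3D11DeviceContext*`) and omitting error handling, creates a small dynamic constant buffer and updates it through the device context:

```cpp
// Create a CPU-writable dynamic buffer.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth      = sizeof(float) * 4;
desc.Usage          = D3D11_USAGE_DYNAMIC;          // CPU-writable, GPU-readable
desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

ID3D11Buffer* buffer = nullptr;
device->CreateBuffer(&desc, nullptr, &buffer);

// The update goes through the device context. With MAP_WRITE_DISCARD the
// runtime may hand back a freshly renamed allocation, so draw calls issued
// before this point still see the old contents.
float data[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
D3D11_MAPPED_SUBRESOURCE mapped = {};
context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
memcpy(mapped.pData, data, sizeof(data));
context->Unmap(buffer, 0);
```

`D3D11_MAP_WRITE_DISCARD` is what allows the runtime to rename the buffer rather than stall until the GPU has finished reading the previous contents.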
Resources are generally not used directly in rendering. For rendering, resource views are used instead.
Below are several examples of problems we aim to solve, along with the approach of using resources and resource views to solve them.
Intermediate render target
For a First Person Shooter game, we would like to create a camera monitor, which can display any other part of the level on it. The camera object is a model in the scene, and is not guaranteed to be right in front of us. The part of the level the camera is pointing toward is not considered static.
Since the monitor is a model, we can simply make the screen itself a texture. Now our problem is reduced to drawing an environment to a resource usable as a texture.
For this purpose we need the following components:
- A single ID3D11Texture2D to store the texture data in
- A single ID3D11RenderTargetView referring to the texture created above
- A single ID3D11ShaderResourceView similarly referring to the same texture
Every frame, we first bind the ID3D11RenderTargetView and render the geometry in the area of the camera. We then switch the render target view back to the local view where the player is standing, and render the scene around him. When rendering the monitor's screen geometry, we use the ID3D11ShaderResourceView of the camera render target instead of the shader resource view of a normal image texture. This renders the camera's view on the screen, and the screen can be viewed from any angle while still looking as intended.
For improved quality, it is important to add mipmaps to the texture and to generate them (ID3D11DeviceContext::GenerateMips) before rendering, in order to reduce the effect of Moiré patterns.
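The setup above can be sketched as follows. This is a minimal illustration, assuming an existing `device` (`ID3D11Device*`); the 512×512 size and `DXGI_FORMAT_R8G8B8A8_UNORM` format are illustrative choices, and error handling is omitted:

```cpp
// One texture, usable both as a render target and as a shader resource.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = 512;
texDesc.Height           = 512;
texDesc.MipLevels        = 0;                        // 0 = allocate a full mip chain
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
texDesc.MiscFlags        = D3D11_RESOURCE_MISC_GENERATE_MIPS;

ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &texture);

// Both views refer to the same underlying resource.
ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(texture, nullptr, &rtv);

ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(texture, nullptr, &srv);

// Per frame (on the device context):
//   OMSetRenderTargets(1, &rtv, depthView);  // draw the camera's viewpoint
//   GenerateMips(srv);                       // refresh the mip chain
//   PSSetShaderResources(0, 1, &srv);        // then draw the monitor geometry
```

Note that `D3D11_RESOURCE_MISC_GENERATE_MIPS` requires the texture to carry both the render-target and shader-resource bind flags, which this use case needs anyway.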
Cascaded shadow mapping
For rendering an open-world scene, we would like to render shadows from a directional light, in our case the sun. A popular method for this is Cascaded Shadow Mapping, which we want to implement.
For cascaded shadow mapping, we want to create a texture with a certain number of “depth slices”: for every pixel, we determine its distance from the camera and select an array layer in which to store the pixel's depth value. We then want to sample this texture as an array when rendering the actual scene, but want to be able to render depth to individual slices when rendering the depth buffers themselves.
While a complete explanation of cascaded shadow mapping is beyond the scope of this article, the requirements for a basic implementation are as follows:
- A single ID3D11Texture2D with ArraySize = N to store the depth data in
- A single ID3D11ShaderResourceView with FirstArraySlice = 0 and ArraySize = N, so that the whole array can be sampled in the shader
- N ID3D11DepthStencilView objects with FirstArraySlice = [0...N] and ArraySize = 1, so that each slice can be rendered to individually
Because the texture is used both as a depth buffer and as a shader resource, it must be created with a typeless format such as DXGI_FORMAT_R32_TYPELESS. The shader resource view then has to use a fully typed format such as DXGI_FORMAT_R32_FLOAT, and the depth stencil views have to use DXGI_FORMAT_D32_FLOAT in this example. This is due to the way format type matching is designed to work in Direct3D 11.
When rendering the cascaded shadow maps, each individual slice is bound one by one, and the objects that may fall within the area covered by that slice have to be rendered each time.
Views can also be created with the same type as another view, but with a different format or covering only a subsection of the resource. For instance, in the case of Cascaded Shadow Mapping (CSM) it is necessary to create an ID3D11Texture2D as an array of N layers (ArraySize = N). When sampling from this array in the pixel shader, we want to be able to dynamically select, for every pixel, a layer to sample from. When rendering to the shadow map, however, we want to be able to render to every single layer individually.
A texture is an N-dimensional (where N is between 1 and 3) array of pixels that can be filtered by the rendering hardware for use in sampling. Any texture type can also be created as an array, in which case the resource contains multiple textures of that type.