Resources

A key concept in Direct3D 11 is the resource. A resource is an object usable by the API, and is generally either a texture or a buffer.

A resource is an object that allocates memory and other implementation-dependent prerequisites for use in the API. The data in a resource can be updated both on the CPU and on the GPU, depending on what flags the resource was created with. Resource updates are always done on the device context, guaranteeing that all commands issued before the update see the old data, and all commands issued after it see the newly specified data. This is made possible by a process known as buffer renaming: the runtime allocates a new block of memory for every resource update, and removes old copies only once all outstanding references are guaranteed to have finished.
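
As a rough illustration, consider updating a dynamic buffer through ID3D11DeviceContext::Map with D3D11_MAP_WRITE_DISCARD, the case in which the runtime is free to rename the underlying memory. This is only a minimal sketch; everything other than the API calls is a placeholder.

    #include <d3d11.h>
    #include <cstring>

    // Sketch: update a buffer created with D3D11_USAGE_DYNAMIC and
    // D3D11_CPU_ACCESS_WRITE. `ctx`, `cb`, `data` and `size` are
    // assumed to be provided by the caller.
    void UpdateConstants(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                         const void* data, size_t size)
    {
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        // WRITE_DISCARD lets the runtime hand back a fresh block of
        // memory (renaming) instead of stalling until the GPU has
        // finished reading the old contents.
        if (SUCCEEDED(ctx->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        {
            std::memcpy(mapped.pData, data, size);
            ctx->Unmap(cb, 0);
        }
        // Commands recorded before the Map see the old data; commands
        // recorded afterwards see the new data.
    }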

Resources are generally not used directly in rendering. For rendering, resource views are used instead.

Examples

Below are several examples of problems we aim to solve, along with the approach of using resources and resource views to solve them.

Intermediate render target

Problem:

For a first-person shooter game, we would like to create a camera monitor that can display any other part of the level. The monitor is a model in the scene, and is not guaranteed to be directly in front of the player. The part of the level the camera is pointing toward is not considered static.

Solution:

Since the monitor is a model, we can simply make the screen itself a texture. Now our problem is reduced to drawing an environment to a resource usable as a texture.

For this purpose we need the following components (a creation sketch follows the list):

  1. A single ID3D11Texture2D to store the texture data in
  2. A single ID3D11RenderTargetView referring to the texture we created before
  3. A single ID3D11ShaderResourceView similarly referring to the same texture

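A minimal creation sketch for these three components, assuming an existing `device` (ID3D11Device*); the resolution and format are illustrative choices, not requirements.

    #include <d3d11.h>

    // Sketch: create the off-screen texture together with its render
    // target and shader resource views.
    HRESULT CreateMonitorTarget(ID3D11Device* device,
                                ID3D11Texture2D** tex,
                                ID3D11RenderTargetView** rtv,
                                ID3D11ShaderResourceView** srv)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = 512;
        desc.Height = 512;
        desc.MipLevels = 1;          // single mip; see the mipmap note below
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        // The texture is both rendered to and sampled from, so both
        // bind flags are required.
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        HRESULT hr = device->CreateTexture2D(&desc, nullptr, tex);
        if (FAILED(hr)) return hr;

        hr = device->CreateRenderTargetView(*tex, nullptr, rtv);
        if (FAILED(hr)) return hr;

        return device->CreateShaderResourceView(*tex, nullptr, srv);
    }
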
When rendering, we first need to render the “camera view” to be displayed on the monitor, as we cannot render the monitor without that information. We therefore bind the ID3D11RenderTargetView first and render the geometry in the area of the camera. We then switch the render target to the view where the player is standing and render the scene around them. When rendering the monitor's screen geometry, we bind the ID3D11ShaderResourceView of the camera render target instead of the shader resource view of a normal image texture. This renders the camera's view onto the screen, and the screen can be viewed from any angle while still looking as intended. A sketch of this ordering follows.
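
All names other than the API calls below are placeholders, and viewport changes between the two passes are elided; this is a sketch of the ordering, not a complete renderer.

    #include <d3d11.h>

    // Sketch of the per-frame ordering described above.
    void RenderFrame(ID3D11DeviceContext* ctx,
                     ID3D11RenderTargetView* monitorRTV,
                     ID3D11DepthStencilView* monitorDSV,
                     ID3D11RenderTargetView* backBufferRTV,
                     ID3D11DepthStencilView* sceneDSV,
                     ID3D11ShaderResourceView* monitorSRV)
    {
        // 1. Render the camera's view into the off-screen texture first.
        ctx->OMSetRenderTargets(1, &monitorRTV, monitorDSV);
        // ... draw the geometry in the area of the in-game camera ...

        // 2. Switch to the back buffer and render the player's view.
        ctx->OMSetRenderTargets(1, &backBufferRTV, sceneDSV);
        // ... when drawing the monitor's screen, bind the camera texture
        // in place of a normal image texture:
        ctx->PSSetShaderResources(0, 1, &monitorSRV);
        // ... draw the monitor geometry ...

        // 3. Unbind the SRV so the texture can be rendered to again next
        // frame without a read/write hazard.
        ID3D11ShaderResourceView* nullSRV = nullptr;
        ctx->PSSetShaderResources(0, 1, &nullSRV);
    }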

For improved quality, consider adding mipmaps to the texture and generating them (ID3D11DeviceContext::GenerateMips) before rendering, in order to reduce the effect of moiré patterns.
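
For this, the texture from the sketch above would need MipLevels = 0 (a full mip chain) and the D3D11_RESOURCE_MISC_GENERATE_MIPS misc flag in addition to both bind flags; the regeneration itself is a single call. `ctx` and `monitorSRV` are placeholders as before.

    #include <d3d11.h>

    // Sketch: regenerate the mip chain after the camera view has been
    // rendered, before the texture is sampled.
    void RegenerateMonitorMips(ID3D11DeviceContext* ctx,
                               ID3D11ShaderResourceView* monitorSRV)
    {
        ctx->GenerateMips(monitorSRV);  // fills the lower mips from mip 0
    }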

Cascaded shadow mapping

Problem:

For rendering an open-world scene, we would like to render shadows from a directional light, in our case the sun. A popular method for this is Cascaded Shadow Mapping, which we want to implement.

For cascaded shadow mapping, we want to create a texture with a certain number of “depth slices”: for every pixel, we determine its distance from the camera and pick an array layer in which to store its depth value. We then want to sample this texture as an array when rendering the actual scene, but be able to render depth to individual slices when rendering the depth buffers themselves.

Solution:

While a complete explanation of cascaded shadow mapping is beyond the scope of this article, the requirements for a basic implementation are as follows:

  1. 1 ID3D11Texture2D with ArraySize = N
  2. 1 ID3D11ShaderResourceView with FirstArraySlice = 0 and ArraySize = N
  3. N ID3D11DepthStencilViews with FirstArraySlice = [0...N-1] and ArraySize = 1

NOTE: The Texture2D will have to be created with a “TYPELESS” format such as DXGI_FORMAT_R32_TYPELESS. The shader resource view will have to use the matching format ending in _FLOAT rather than _TYPELESS (DXGI_FORMAT_R32_FLOAT in this example), and the depth stencil views will have to replace R32_TYPELESS with D32_FLOAT. This is due to the way type matching is designed to work in Direct3D 11.
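
A minimal creation sketch for these three items, using the formats from the note above; `device` and the cascade resolution are assumptions, and error handling is reduced to early returns.

    #include <d3d11.h>
    #include <vector>

    // Sketch: create the cascade texture array, one SRV over the whole
    // array, and one DSV per slice.
    HRESULT CreateCascades(ID3D11Device* device, UINT N,
                           ID3D11Texture2D** tex,
                           ID3D11ShaderResourceView** srv,
                           std::vector<ID3D11DepthStencilView*>& dsvs)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = 2048;
        desc.Height = 2048;
        desc.MipLevels = 1;
        desc.ArraySize = N;                       // one slice per cascade
        desc.Format = DXGI_FORMAT_R32_TYPELESS;   // typeless: the views pick the type
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

        HRESULT hr = device->CreateTexture2D(&desc, nullptr, tex);
        if (FAILED(hr)) return hr;

        // One SRV over the whole array, read as R32_FLOAT in the shader.
        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
        srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
        srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2DARRAY;
        srvDesc.Texture2DArray.MostDetailedMip = 0;
        srvDesc.Texture2DArray.MipLevels = 1;
        srvDesc.Texture2DArray.FirstArraySlice = 0;
        srvDesc.Texture2DArray.ArraySize = N;
        hr = device->CreateShaderResourceView(*tex, &srvDesc, srv);
        if (FAILED(hr)) return hr;

        // N DSVs, each targeting a single slice, written as D32_FLOAT.
        dsvs.resize(N, nullptr);
        for (UINT i = 0; i < N; ++i)
        {
            D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
            dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
            dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
            dsvDesc.Texture2DArray.MipSlice = 0;
            dsvDesc.Texture2DArray.FirstArraySlice = i;
            dsvDesc.Texture2DArray.ArraySize = 1;
            hr = device->CreateDepthStencilView(*tex, &dsvDesc, &dsvs[i]);
            if (FAILED(hr)) return hr;
        }
        return S_OK;
    }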

When rendering the cascaded shadow maps, the individual slices are bound one by one, and the objects that may fall within the area covered by each slice are rendered each time.
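
A rough sketch of that loop, reusing the `dsvs` from the previous sketch; draw calls and per-cascade camera setup are elided.

    #include <d3d11.h>
    #include <vector>

    // Sketch: depth-only pass over each cascade slice in turn.
    void RenderShadowMaps(ID3D11DeviceContext* ctx,
                          const std::vector<ID3D11DepthStencilView*>& dsvs)
    {
        for (ID3D11DepthStencilView* dsv : dsvs)
        {
            ctx->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);
            // No color target is bound: only depth is written.
            ctx->OMSetRenderTargets(0, nullptr, dsv);
            // ... draw the objects possibly within this cascade's area ...
        }
    }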

Views can also be created with the same type as the resource but a different format, or over a subsection of the resource. For instance, in the case of Cascaded Shadow Mapping (CSM) it is necessary to create an ID3D11Texture2D as an array of N layers (ArraySize = N). When sampling from this array in the pixel shader, we want to be able to select, per pixel, which layer to sample from. When rendering to the shadow map, however, we want to render to each layer individually.

A texture is an N-dimensional (where N is between 1 and 3) array of pixels that can be filtered by the rendering hardware for use in sampling. 1D and 2D textures can also be created as arrays, where a single resource contains multiple textures of the same type and dimensions.