This blog series is a part of the write-up assignments of my Real-Time Game Rendering class in the Master of Entertainment Arts & Engineering program at University of Utah. The series will focus on C++, Direct3D 11 API and HLSL.

In this post, I will talk about how I added 2D sprite support to my rendering engine.

Sprite Representation

To display 2D sprites on screen, we don’t really need an actual “mesh” to represent them. A quad with only 4 vertices is enough to draw any 2D UI we need. To put it differently, we only need one vertex buffer and one vertex input layout to draw all the 2D UI in the game! Below is my singleton sprite class, which is instanced only once throughout the whole game (notice the static GetSprite() function).

cSrptieClass.PNG
Sprite class used to draw any UI
cSpriteGetSpriteFunction.PNG
Static getter for the sprite class
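A minimal sketch of such a singleton is shown below. The class and member names are illustrative assumptions, not the engine's actual API; the D3D resources that would be created once in initialization are indicated as comments.

```cpp
#include <cassert>

// Hypothetical singleton sprite class. The quad's vertex buffer and input
// layout would be created once and shared by every sprite draw.
class cSprite
{
public:
    // Static getter: the single shared instance for the whole game
    static cSprite& GetSprite()
    {
        static cSprite s_instance; // created on first use
        return s_instance;
    }

    cSprite(const cSprite&) = delete;
    cSprite& operator=(const cSprite&) = delete;

private:
    cSprite() = default; // only GetSprite() may construct it
    // ID3D11Buffer* m_vertexBuffer = nullptr;     // one quad, 4 vertices
    // ID3D11InputLayout* m_inputLayout = nullptr; // shared by all sprites
};
```

Using a function-local static gives thread-safe, lazy construction without a separate init-order concern.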

For the actual vertex data, I am assuming that when the values are passed into the shaders, they will already be normalized to projected-space positions. Therefore, I can just use R8G8_SNORM for vertex positions and R8G8_UNORM for texture UVs.

sSpriteVertexFormat.PNG
Sprite vertex representation, this struct is 4 bytes large
cSpriteInputLayoutDescription.PNG
Sprite class input layout
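A sketch of what that 4-byte vertex could look like in C++ (field names are assumptions for illustration):

```cpp
#include <cstdint>

// Hypothetical 4-byte sprite vertex matching the input layout:
// R8G8_SNORM for position, R8G8_UNORM for UV.
struct sSpriteVertex
{
    int8_t  x, y; // position, DXGI_FORMAT_R8G8_SNORM: -127..127 -> -1..1
    uint8_t u, v; // texture UV, DXGI_FORMAT_R8G8_UNORM: 0..255 -> 0..1
};
static_assert(sizeof(sSpriteVertex) == 4, "sprite vertex must be 4 bytes");
```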

Inside the sprite class initialization function, we need to initialize the vertex buffer with some data. However, we have no idea what kinds, sizes, or positions the sprites of this game will have, so we need to keep it general. One reasonable approach is to make the quad fill the whole screen, which looks like the data shown below because we are using SNORM and UNORM for positions and UVs respectively. We can manipulate the actual display positions, orientations, and scales later with data inside our constant buffer.

cSpriteVertices.PNG
Sprite vertices initialized data
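One possible full-screen quad in triangle-strip order would look like this (the exact vertex order and UV orientation are assumptions; UVs are flipped vertically here because UV (0,0) is conventionally the texture's top-left while projected-space (-1,-1) is the screen's bottom-left):

```cpp
#include <cstdint>

// Same hypothetical 4-byte vertex as before
struct sSpriteVertex { int8_t x, y; uint8_t u, v; };

// Four vertices covering the full screen, in triangle-strip order.
// 127/-127 map to +1/-1 in SNORM; 0/255 map to 0/1 in UNORM.
const sSpriteVertex g_fullScreenQuad[4] =
{
    { -127, -127,   0, 255 }, // bottom-left
    {  127, -127, 255, 255 }, // bottom-right
    { -127,  127,   0,   0 }, // top-left
    {  127,  127, 255,   0 }, // top-right
};
```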

Sprite Object

Now we have a quad to draw our sprites on, but which textures are we using? What about positions and orientations? To link all of these together, I created a separate sprite object class. This class holds a handle to a material, which contains the texture we need, along with other data such as the scale, the position, the render order (layer), etc. Notice that the scale is relative to the whole screen, and the position is in projected space. This leaks lower-level engine concepts upward into game code and causes some other problems that will be discussed later.

cSpriteObjectPublicInterface.PNG
Sprite object class public interface
SpritesPositionsAndScales.PNG
Sprites positions and scales in the projected space
uint8ForSpriteOrder.PNG
Sprites render order is uint8_t
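A stripped-down sketch of such a sprite object (all names and the material-handle type are hypothetical, not the engine's actual interface):

```cpp
#include <cstdint>

// Illustrative material handle; the real engine's handle type differs.
struct sMaterialHandle { uint32_t index = 0; };

// Illustrative sprite object: links the shared quad to per-sprite data.
class cSpriteObject
{
public:
    sMaterialHandle m_material;           // provides the texture
    float m_position[2] = { 0.0f, 0.0f }; // projected-space position
    float m_scale[2]    = { 1.0f, 1.0f }; // fraction of the whole screen
    float m_rotation    = 0.0f;           // radians
    uint8_t m_renderOrder = 0;            // higher draws on top
};
```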

Sprite Shaders

Now let’s talk about the sprite shaders and how the engine code passes information to the graphics hardware. We already have a matrix for the local-to-world transformation and another for local-to-projected. However, when transforming UI we don’t care about the third dimension, so using a full 4×4 matrix is wasted computation: a 2×2 matrix is all we need for the rotation. To achieve this, we can pack all the information needed from C++ into the shader constants.

Notice the seemingly arbitrary indices for the position and scaling values. They are set this way because D3D stores matrices in column-major order by default. By placing the values at those positions inside the matrix, we can conveniently access them in the shader code with something like g_transform_localToWorld[0].xy. More importantly, we can cast one part of the matrix to a 2×2 matrix to perform our rotation!

CppSettingPositionRotationScale.PNG
Passing the constant matrix data to GPU
// Scale and rotate, and then translate
float2 newPos = Transform((float2x2)g_transform_localToWorld[1], i_vertexPosition_local * g_transform_localToWorld[0].zw) + g_transform_localToWorld[0].xy;
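The same math, mirrored on the CPU for clarity: scale the local position, rotate it with a 2×2 matrix, then translate. This is an illustrative sketch of the operation only; the engine actually packs these values into the 4×4 localToWorld matrix rather than passing them separately.

```cpp
#include <cmath>

struct float2 { float x, y; };

// Scale, rotate (2x2), then translate a local-space sprite vertex.
float2 TransformSpriteVertex(float2 local, float2 scale,
                             float rotationRadians, float2 translation)
{
    // Scale first
    const float sx = local.x * scale.x;
    const float sy = local.y * scale.y;
    // 2x2 rotation matrix applied to the scaled position
    const float c = std::cos(rotationRadians);
    const float s = std::sin(rotationRadians);
    const float rx = c * sx - s * sy;
    const float ry = s * sx + c * sy;
    // Then translate into projected space
    return { rx + translation.x, ry + translation.y };
}
```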

Render Command

Compared to other meshes in the game scene, we know that sprite objects will always be directly in front of the camera and cover any other objects in the world. Therefore, we can separate them into a different render command and draw them after all of the meshes in the scene are drawn. This lets us turn off both depth testing and depth writes for our sprites, which means that whichever sprite is drawn last ends up on top of the others. To gain more control over this, I pack the render order stored in the sprite object into the render command and use it as the primary key when sorting the commands.

SpriteDrawRenderCommand.PNG
a new render command for sprites
EncodeSpriteRenderCommand.PNG
The data packed into sprite’s render command

If we take a look at the GPU timeline when rendering the sprites, we can see that we set the vertex buffer for our sprites before the first sprite is rendered, and that we set the primitive topology to triangle strip instead of triangle list. After setting those correctly, we only need to call Draw(4, 0) and the quad draws correctly.

GPUTimeline_Outline.png

Results

Below is the result of my new sprite system.

SpriteWithRotation.PNG
Sprites with different sizes and orientations

Problems

Since we are directly setting positions in projected space and storing them inside the sprite object, the UI gets squeezed or stretched when the screen resolution changes. Therefore, we need a method to address this issue.

SuperWideScreen.PNG
Super wide resolution squeezes the UI vertically
SuperNarrowScreen.PNG
Super narrow resolution squeezes the UI horizontally

Resolution Independent Sprite Size

Instead of directly storing a ratio relative to the screen size, we can store a fixed desired size in the sprite object and calculate the scale from the current screen resolution. This way, the sprites maintain fixed on-screen sizes regardless of resolution.

NewSpriteScaleNormal.PNG
Normal resolution
NewSpriteScaleSuperWide.PNG
Super wide resolution
NewSpriteScaleSuperTall.PNG
Super tall resolution