# FTJ2014 Post-mortem Part II – Animation

Unlike rendering, I went into this jam with no clue how to do animation. Fortunately, it turns out to be fairly simple (in part thanks to my per-entity rendering setup.)

First, a brief bit of background, for anyone unfamiliar with rendering:

Each vertex drawn has three properties: position, color, and texture coordinates. The rest of the pixels drawn are determined by linear interpolation between connected vertices. Which vertices are connected depends on the primitive type defined when we send the vertex buffer to the GPU. In Majo, everything is rendered as Quads (4 consecutive vertices create a single face.) Finally, by convention, the elements of the position vector are referred to by the familiar (x, y, z), while texture coordinates are labeled (u, v, w).

So, the following code renders a 16×16 square starting at the origin (top-left of screen), textured with a 32×32 block of the current texture, starting at the top-left of the texture image:

```cpp
void Foo::render(sf::Vertex * quad) const
{
    quad[0].position = sf::Vector2f(0, 0);
    quad[1].position = sf::Vector2f(16, 0);
    quad[2].position = sf::Vector2f(16, 16);
    quad[3].position = sf::Vector2f(0, 16);

    quad[0].texCoords = sf::Vector2f(0, 0);
    quad[1].texCoords = sf::Vector2f(32, 0);
    quad[2].texCoords = sf::Vector2f(32, 32);
    quad[3].texCoords = sf::Vector2f(0, 32);
}
```

(There’s no requirement for texture size and actual rendered size to be related, but direct multiples prevent squashing and stretching.)

What this means is we can place multiple sprites on a single texture image (a spritesheet), and slice out the portion we want to render. Which means that *animating* these sprites is just a matter of changing which slice of the spritesheet we render.

A single frame of animation, then, is simply a rectangle indicating what part of the spritesheet to render:

```cpp
struct AnimationFrame
{
    sf::Vector2f uv;          // Offset into the texture.
    sf::Vector2f spriteSize;  // Size of the sprite on the texture.
};
```

And an animation consists of a list of frames, with some bookkeeping regarding when to switch:

```cpp
class Animation
{
private:
    /* List of animation frames. */
    std::vector<AnimationFrame> m_frames;
    int m_currentFrame;

    int m_ticksPerFrame;
    int m_ticksSinceLastFrame;

public:
    // Constructors / Accessors elided.

    /* Start the animation. */
    void start();
    /* Update call.  Advances the animation. */
    void update();
    /*
     * Render the texture coords to quad.
     * Does not touch position -- that is to be set by the entity.
     */
    void render(sf::Vertex * quad) const;
};
```

The render method allows the owning entity to pass a quad to render the appropriate uv coordinates (while still rendering position and color itself.)

We’re not done yet!

The problem is that this class is hefty: a four-frame animation weighs in at 76 bytes, plus overhead incurred by the vector class. Copying this class as a member of every Entity that uses an animation adds up to a lot of memory moving around. (The issue is less the memory usage, and more that allocating, copying, and deallocating that memory is not free.)

The obvious solution is to cache the Animation, and let each entity store a pointer to it. However, we now have a different problem: the timing information and current frame are shared between all entities using a given animation. (Since each entity calls Animation::update, this leads to undesired behavior.)

This leads to the core lesson I learned during this jam: data and state are different things. Separate them.

Our animation frames are data: we load them into memory once, and only read from them. On the other hand, the timing information is *state*, modified on each update. These are two separate concepts, so let’s carve out the stateful bits into their own structure:

```struct AnimationState
{
int frameCount;
int currentFrame;
int ticksPerFrame;
int ticksSinceLast;

// Methods elided
};
```

With a bit of trivial setup, each animated Entity now owns an AnimationState, as well as a pointer to an Animation (20 + 4 bytes per Entity on x86). Now we can cache and share Animations all we want, as they are constant data. Animation::render's signature changes to accept the state explicitly: `void render(sf::Vertex * quad, const AnimationState& state) const;`
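The payoff is easy to demonstrate: two entities can now share one constant Animation while animating on their own clocks. A minimal sketch (illustrative names; the update logic is shown as a free function here, where in the game it lives on the struct):

```cpp
// Per-entity animation state, as described above.
struct AnimationState
{
    int frameCount;
    int currentFrame;
    int ticksPerFrame;
    int ticksSinceLast;
};

// Advance the state by one tick, looping back to frame 0 at the end.
void update(AnimationState& s)
{
    if (++s.ticksSinceLast >= s.ticksPerFrame)
    {
        s.ticksSinceLast = 0;
        s.currentFrame = (s.currentFrame + 1) % s.frameCount;
    }
}
```

Two entities pointing at the same Animation but owning separate AnimationStates now advance independently; updating one leaves the other's current frame untouched.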

Over 1000 entities, this system saves ~54.6kB of memory. That’s not a lot. However, by enabling caching of Animations, we reduce the amount of initialization code involved, as well as the number of runtime copies/moves necessary. Decreased memory usage is just a nice bonus.

Next time: design, scope, and some miscellaneous issues encountered over the week.

# FTJ2014 Post-mortem Part I – Rendering

Fuck This Jam 2014 has wrapped up, so I can finally take the time to pop stack and talk about what I’ve learned in the process. Together with a former classmate, I built “Majo no Mahouki” [link], a side-scrolling shoot ’em up. The game itself leaves much to be desired (as to be expected from a 7-day jam), but I think some of the underlying framework code is worth discussing.

The game is built in C++ on top of the SFML library. SFML provides (among other things) a 2D graphics abstraction over OpenGL. The core interface to this abstraction is `sf::RenderTarget::draw`, which accepts a buffer of vertices and an `sf::RenderStates` (pointing to textures, shaders, etc.) Every call to `draw` results in a GL draw call. As such, these calls are *not* cheap.

However, the overhead is more or less the same between 4 vertices and 4000. Therefore, the solution is obvious: batching!

Majo handles this by creating a single VertexArray in the Level object (which handles the actual rendering for all in-game sprites). Each Entity, then, has a method with the signature `virtual void render(sf::Vertex * quad) const`. The level hands each entity a pointer to a quad in its vertex array, and the entity renders itself to it.

An example (default static sprite rendering):

```cpp
void Entity::render(sf::Vertex * quad) const
{
    // m_collider, halfsize, and m_spriteBounds are members of Entity.
    const sf::Vector2f position = m_collider.getPosition();

    quad[0].position = sf::Vector2f(position.x - halfsize.x, position.y - halfsize.y);
    quad[1].position = sf::Vector2f(position.x + halfsize.x, position.y - halfsize.y);
    quad[2].position = sf::Vector2f(position.x + halfsize.x, position.y + halfsize.y);
    quad[3].position = sf::Vector2f(position.x - halfsize.x, position.y + halfsize.y);

    quad[0].texCoords = sf::Vector2f(m_spriteBounds.left, m_spriteBounds.top);
    quad[1].texCoords = sf::Vector2f(m_spriteBounds.left + m_spriteBounds.width, m_spriteBounds.top);
    quad[2].texCoords = sf::Vector2f(m_spriteBounds.left + m_spriteBounds.width, m_spriteBounds.top + m_spriteBounds.height);
    quad[3].texCoords = sf::Vector2f(m_spriteBounds.left, m_spriteBounds.top + m_spriteBounds.height);
}
```

This convention has issues, obviously: Level assumes that each entity only renders 4 vertices, and Entity assumes it is passed 4 vertices to render to. In hindsight, passing a reference to the array and an index into it may be a better calling convention.
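That alternative convention might look something like the following sketch (hypothetical, not what Majo ships): the entity receives the whole buffer plus a base index, so it can bounds-check instead of trusting a bare pointer to exactly four vertices. A stand-in `Vertex` struct is used so the sketch compiles without SFML:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for sf::Vertex so this compiles without SFML.
struct Vertex { float x, y; };

struct Entity
{
    virtual ~Entity() {}

    // Hypothetical convention: pass the array and an index into it,
    // so the entity can verify there is room before writing.
    virtual void render(std::vector<Vertex>& verts, std::size_t base) const
    {
        if (base + 4 > verts.size()) return;  // Bounds check.

        for (std::size_t i = 0; i < 4; ++i)
            verts[base + i] = Vertex{0.f, 0.f};  // Placeholder quad.
    }
};
```

The cost is a slightly noisier call site; the benefit is that an entity can no longer scribble past the end of the level's vertex array.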

In any case, we now have our vertices set up. However, this is not enough. While the vertices define position and texture coordinates, they do *not* define what texture is actually used (nor shaders, nor transforms, though Majo uses neither.) For proper batching, we must sort the list of entities by texture before rendering them to the vertex array:

```cpp
void Level::render()
{
    m_projectileCount = 0;
    m_enemyCount = 0;

    m_verts.clear();
    m_verts.resize(m_entities.size() * 4);

    // Sort by texture type -- we want to batch drawing by texture.
    std::sort(m_entities.begin(), m_entities.end(),
        [](const Entity * a, const Entity * b) {
            return a->getTextureId() < b->getTextureId();
        });

    // Scan the list, rendering to the vertex buffer and
    // recording counts of each texture type.
    for (std::size_t i = 0; i < m_entities.size(); ++i)
    {
        const TextureId texId = m_entities[i]->getTextureId();
        if (texId == TEXTURE_PROJECTILES) m_projectileCount++;
        else if (texId == TEXTURE_ENEMIES) m_enemyCount++;

        m_entities[i]->render(&m_verts[i * 4]);
    }
}
```

(A TextureId is Majo’s internal handle to a cached texture.)

Now Level has an array of vertices, and offsets within the array for each texture switch. (This particular method of recording the offset will not scale well, but Majo uses a total of 3 textures, so it works here.) The draw call, then, is straightforward:

```cpp
void Level::draw(sf::RenderTarget& target, sf::RenderStates states) const
{
    if (m_entities.empty()) return;

    const sf::Vertex * projectileVerts = &m_verts[0];
    const sf::Vertex * enemyVerts = &m_verts[m_projectileCount * 4];
    const sf::Vertex * playerVerts = &m_verts[(m_projectileCount + m_enemyCount) * 4];

    states.texture = Resources::getTexture(TEXTURE_PROJECTILES);
    target.draw(projectileVerts, m_projectileCount * 4, sf::Quads, states);

    states.texture = Resources::getTexture(TEXTURE_ENEMIES);
    target.draw(enemyVerts, m_enemyCount * 4, sf::Quads, states);

    states.texture = Resources::getTexture(TEXTURE_PLAYER);
    target.draw(playerVerts, 4, sf::Quads, states);

    // Hacky UI drawing elided.
}
```
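As a sketch of the more scalable bookkeeping hinted at above (hypothetical; Majo hard-codes three counters), one could record quad counts per texture in a map, then turn the counts into vertex start offsets with a running sum:

```cpp
#include <map>

using TextureId = int;  // Stand-in for Majo's texture handle.

// Convert per-texture quad counts into vertex start offsets, iterating in
// TextureId order -- one entry per texture, however many textures exist.
std::map<TextureId, int> startOffsets(const std::map<TextureId, int>& quadCounts)
{
    std::map<TextureId, int> offsets;
    int running = 0;
    for (const auto& entry : quadCounts)
    {
        offsets[entry.first] = running;
        running += entry.second * 4;  // 4 vertices per quad.
    }
    return offsets;
}
```

The draw loop would then issue one `draw` call per map entry instead of three hard-coded ones.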

Thus, we can draw all Entities with only 3 calls to `RenderTarget::draw`. This is a fairly extreme example — while individually rendering entities is prohibitively expensive, it’s not necessarily worthwhile to batch *every* draw. However, it can be done.

Next time, we’ll explore how Majo handles sprite animation.

I’m currently working on a game prototype (using C++ and SFML) that involves, among other things, dragging roads on a tilemap. I had a very specific behavior in mind for dragging, which took some work to get right. I’m going to document my results here.

### Plumbing

Before we can even start to worry about how the dragging works, we have to implement a dragging action. SFML exposes events for the mouse being clicked, released, and moved — these form the basis of our action.

```cpp
//... Inside main loop

sf::Event event;
while (window.pollEvent(event))
{
    switch (event.type)
    {
    //... Other event handling
    case sf::Event::MouseButtonPressed:
    {
        // Map global pixel location to a location relative to the window.
        sf::Vector2f clickPos = window.mapPixelToCoords(
            sf::Vector2i(event.mouseButton.x, event.mouseButton.y),
            m_map->getView());

        // Map this relative pixel location to a specific tile.
        sf::Vector2i tilePos = m_map->mapPixelToTile(clickPos);

        // Tell the current tool where to start a drag.
        m_currentTool->startDrag(tilePos);
        break;
    }
    case sf::Event::MouseButtonReleased:
        // End the active drag.
        m_currentTool->endDrag();
        break;
    case sf::Event::MouseMoved:
    {
        sf::Vector2f mousePos = window.mapPixelToCoords(
            sf::Vector2i(event.mouseMove.x, event.mouseMove.y),
            m_map->getView());

        sf::Vector2i tilePos = m_map->mapPixelToTile(mousePos);

        // Update the current position of the drag.
        m_currentTool->updateDrag(tilePos);
        break;
    }
    }
}

//...
```

This is all fairly straightforward: map the mouse coordinates to a particular tile and pass off to the Tool class. We now have the groundwork in place:

```cpp
class Tool
{
private:
    bool m_isDragging;
    sf::Vector2i m_dragStart;
    sf::Vector2i m_dragCurrent;
    std::vector<sf::Vector2i> m_tiles;

    void updatePath();  // <-- The magic happens here.

public:
    void startDrag(const sf::Vector2i& pos)
    {
        if (m_isDragging) return;

        m_isDragging = true;
        m_tiles.push_back(pos);
        m_dragStart = pos;
        m_dragCurrent = pos;
    }
    void updateDrag(const sf::Vector2i& pos)
    {
        if (!m_isDragging) return;

        m_dragCurrent = pos;
        updatePath();
    }
    void endDrag()
    {
        if (!m_isDragging) return;

        m_isDragging = false;
        m_tiles.clear();
    }
};
```
```

### The Algorithm

Our goal is to produce a list of tile coordinates that connect `m_dragStart` and `m_dragCurrent`. We want a linear path, with an optional dogleg, i.e.:

```
* * * * * *
          * *
            * *
              * *
```

Finally, if there is a diagonal segment, we want the tip to “rotate” as we extend the drag:

```
* * *
    * *
      *

  |
  v

* * *
    * *
      * *

  |
  v

* * *
    * *
      * *
        *
```

The first draft of the algorithm succeeds at everything except rotating:

```cpp
void Tool::updatePath()
{
    // Clear output vector.
    m_tiles.clear();

    sf::Vector2i curr = m_dragStart;

    while (curr != m_dragCurrent) {
        m_tiles.push_back(curr);

        // Signed deltas
        const int dx = curr.x - m_dragCurrent.x;
        const int dy = curr.y - m_dragCurrent.y;

        if (abs(dx) > abs(dy)) {
            curr.x -= sign(dx);
        }
        else {
            curr.y -= sign(dy);
        }
    }

    m_tiles.push_back(curr);
}
```

(The `sign` function simply maps negative numbers to -1, positive numbers to 1, and 0 to 0.)
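One branch-free way to write it, for reference:

```cpp
// Maps negatives to -1, positives to 1, and 0 to 0.
int sign(int x)
{
    return (x > 0) - (x < 0);
}
```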

This is a very simple algorithm, but it doesn’t handle diagonals exactly the way we’d like. The “rotation” of the diagonal end is determined by which branch runs when `abs(dx) == abs(dy)` — here it’s the vertical shift, so the diagonal always ends with a vertical shift. This causes a “wiggling” motion when dragging diagonally, as the entire diagonal segment shifts back and forth to compensate. Not ideal.

The key is that each time we extend the path by one tile, in either direction, we want the diagonal end to switch rotations. This leads to the following:

```cpp
void Tool::updatePath()
{
    // Clear output vector.
    m_tiles.clear();

    sf::Vector2i curr = m_dragStart;

    // Manhattan distance from m_dragStart to m_dragCurrent.
    const int dist = abs(m_dragStart.y - m_dragCurrent.y) +
                     abs(m_dragStart.x - m_dragCurrent.x);
    const bool odd = dist % 2 != 0;

    while (curr != m_dragCurrent) {
        m_tiles.push_back(curr);

        // Signed deltas
        const int dx = curr.x - m_dragCurrent.x;
        const int dy = curr.y - m_dragCurrent.y;

        if (abs(dx) > abs(dy)) {
            curr.x -= sign(dx);
        }
        else if (abs(dx) < abs(dy)) {
            curr.y -= sign(dy);
        }
        else {
            if (odd) {
                curr.x -= sign(dx);
            }
            else {
                curr.y -= sign(dy);
            }
        }
    }

    m_tiles.push_back(curr);
}
```

In this modified version, we take the Manhattan distance between the start and current tiles. If this distance is odd, we shift horizontally when the deltas are equal; if it’s even, we shift vertically. Which rotation is used for odd versus even is arbitrary; it only matters that they differ. Now, as we drag along the diagonal, the rotation of the end tile changes as the mouse crosses tile boundaries, preserving the previous path and eliminating the “wiggle”.
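To see the parity rule in action, here's the algorithm as a self-contained free function (a stand-in `Vec` type replaces `sf::Vector2i`, and the names are illustrative):

```cpp
#include <cstdlib>
#include <vector>

// Stand-in for sf::Vector2i so the sketch compiles without SFML.
struct Vec { int x, y; };
bool operator!=(const Vec& a, const Vec& b) { return a.x != b.x || a.y != b.y; }

int sign(int v) { return (v > 0) - (v < 0); }

// Path from start to end with an optional diagonal segment; the tip's
// rotation is decided by the parity of the Manhattan distance.
std::vector<Vec> makePath(Vec start, Vec end)
{
    std::vector<Vec> tiles;
    Vec curr = start;

    const int dist = std::abs(start.x - end.x) + std::abs(start.y - end.y);
    const bool odd = dist % 2 != 0;

    while (curr != end) {
        tiles.push_back(curr);

        const int dx = curr.x - end.x;
        const int dy = curr.y - end.y;

        if (std::abs(dx) > std::abs(dy))      curr.x -= sign(dx);
        else if (std::abs(dx) < std::abs(dy)) curr.y -= sign(dy);
        else if (odd)                         curr.x -= sign(dx);
        else                                  curr.y -= sign(dy);
    }

    tiles.push_back(curr);
    return tiles;
}
```

Each loop iteration moves one tile closer in Manhattan distance, so a drag spanning distance *d* always produces exactly *d* + 1 tiles.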