The playable game corresponding to this post can be found online here.
And the source code for this part can be found on GitHub here.
So far we’ve got a basic level that you can move around, and doors work, but it’s pretty empty - there are no guards, no health items, and no treasure. In this part we’re going to start adding these things. I’m going to refer to them collectively as game objects.
Loading the game objects
This isn’t particularly hard but there is a good amount of grunt work to be done, most of which can be found in the excitingly named GameObjectFactory.cs. Game objects are represented by a single number in a 64x64 array, with each number referring to an object of a given type with a given set of properties - for example, which way an enemy is facing. Some enemies only appear at certain difficulty levels, and the state of the enemy (standing, on a path) is also captured.
As you can probably imagine by now this just involves a lot of tedious “if gameObjectValue = value then create this kind of object” code. It’s riveting!
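To give a flavour of the shape of that code, here’s a heavily simplified sketch - the record definitions here are stand-ins for the real ones, and the value ranges are invented for illustration, not the actual Wolfenstein encoding:

```csharp
// Simplified stand-ins for the real records and property types - the actual
// definitions (and the real value mapping) live in GameObjectFactory.cs
public record BasicGameObjectProperties((int x, int y) Position, bool Pickupable);
public record EnemyProperties(int Direction, int Health);
public abstract record AbstractGameObject(BasicGameObjectProperties CommonProperties);
public record StaticGameObject(BasicGameObjectProperties CommonProperties)
    : AbstractGameObject(CommonProperties);
public record EnemyGameObject(BasicGameObjectProperties CommonProperties, EnemyProperties EnemyProperties)
    : AbstractGameObject(CommonProperties);

public static class GameObjectFactorySketch
{
    // Invented value ranges purely for illustration - the real mapping is a
    // much longer chain of specific values, each with specific properties
    public static AbstractGameObject? Create(int value, (int x, int y) position) =>
        value switch
        {
            0 => null, // empty cell
            < 50 => new StaticGameObject(
                new BasicGameObjectProperties(position, Pickupable: true)),
            _ => new EnemyGameObject(
                new BasicGameObjectProperties(position, Pickupable: false),
                new EnemyProperties(Direction: value % 4, Health: 25))
        };
}
```

The real factory is the same idea repeated for every object the game supports, which is where the tedium comes in.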
I used the DOS-based Wolfenstein editor MapEd, running in DOSBox on my Mac, to figure out what all these numbers mean, as it handily shows the details.
The end result of all this is an array of game objects which I’ve modelled as a simple set of records that capture the properties for the two fundamental types of game object: static and enemy. Static objects are things like ammo pick-ups, treasure, guns etc. They have a core set of properties (pickupable, restores health, ammo etc.) which are shared and expanded on by enemies, which have properties like a direction, an AI state, and health.
In total it’s a large number of properties, and deriving the enemy record from the static record results in a truly massive and unwieldy constructor, so I modelled it slightly differently:
public abstract record AbstractGameObject(BasicGameObjectProperties CommonProperties);
public record StaticGameObject(BasicGameObjectProperties CommonProperties) : AbstractGameObject(CommonProperties);
public record EnemyGameObject(BasicGameObjectProperties CommonProperties, EnemyProperties EnemyProperties)
    : AbstractGameObject(CommonProperties);
I’m still not sure what I think about this approach. I might change it later.
Rendering game objects
With that out of the way it’s time to look at rendering the game objects which, fortunately, is more interesting. Most of the code that the explanation below covers can be found in ObjectRenderer.cs.
The game objects in Wolfenstein are actually simple sprites / bitmaps. There’s nothing 3D about them. Some objects, like guards, have images for multiple orientations for every frame of animation. To render them on screen we have to do the following things:
- Work out their position on the camera plane / viewport (and of course they may not be on the viewport).
- If the sprite has multiple orientations then work out the correct frame to show based on the direction of the sprite (e.g. which way a guard is facing) and the player’s position (e.g. if the guard is facing north and the player is behind them and facing north then we need to show the rear of the guard).
- For each column of the sprite that is on the viewport compare its distance from the player with the z-index of the wall that has been rendered in the same column.
- If the sprite is nearer to the player than the wall then draw that column scaled based on the distance.
Simple, right? To be fair it’s not that complicated in practice, other than step (2), which vexed me when I first approached it.
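Sketched as pseudocode, the whole per-sprite pipeline looks something like this (all the names here are illustrative, not the real method names):

```
for each sprite, ordered farthest to nearest:
    project the sprite onto the camera plane / viewport   // step 1
    if it is entirely off the viewport: skip it
    pick the frame for its orientation vs. the player     // step 2
    for each viewport column the sprite covers:
        if spriteDistance < zIndexes[column]:             // step 3
            draw that sprite column, scaled by distance   // step 4
```

The rest of this post works through those steps in turn.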
Projection onto camera plane / viewport
I’m going to gloss over step (1) somewhat as, like rendering the walls, it’s pretty standard camera stuff, but you can see it in the first 15 lines or so of the RenderObject method. (I may come back at some point and go through how projection works if enough people are interested - but I’m not a math guy and I’m not sure I really have much to add to what is already out there; this book covers it pretty well.)
Sprite orientations
Figuring out the correct orientation caused me some trouble when I first approached this and in the process of implementing this I came across the concept of barycentric coordinates.
Basically, given a frame of animation (the rows in the screenshot below) we need to choose the right sprite for the orientation (the columns) of the guard’s direction relative to the player’s position. There is a sprite offset for each frame and then any sprites for the orientation. The image below shows the standing series of sprites and the running sprites at their different orientations along with their offsets:
My harebrained scheme (I’m sure there is a better way) was, and is, to lay out a series of 8 triangles (one for each orientation N, NE, E, SE, S, SW, W, NW) using vectors with a length of 1, then take the vector from the player to the game object and make it 0.5 in length. This guarantees that if we use the vector as a point it will fall within one of our triangles, and we can use the index of the triangle it lands in to select the correctly oriented sprite.
I specify the triangles using two vectors (we close them with the origin later) with the following code:
var trigCompatibleDirectionVector = directionVector with {Y = directionVector.Y * -1};
const int spriteQuadrants = 8;
var quadrantSize = (360.0 / spriteQuadrants).ToRadians();
var playerRelativePosition =
    new Vector2D(
        game.Camera.Position.X - ego.CommonProperties.Position.X,
        ego.CommonProperties.Position.Y - game.Camera.Position.Y).Normalize();
var vectors = Enumerable
    .Range(0, spriteQuadrants)
    .Select(quadrant =>
    {
        var centerAngle = quadrant * quadrantSize;
        var startAngle = centerAngle - quadrantSize / 2.0;
        var endAngle = centerAngle + quadrantSize / 2.0;
        var startVector = trigCompatibleDirectionVector.Rotate(startAngle);
        var endVector = trigCompatibleDirectionVector.Rotate(endAngle);
        return (quadrant, startVector, endVector);
    })
    .ToImmutableArray();
And then “hit detection” is done using barycentric coordinates, as I linked to earlier:
public static bool IsPointInTriangle(Vector2D p1, Vector2D p2, Vector2D p3, Vector2D testPoint)
{
    // barycentric coordinate approach
    // https://stackoverflow.com/questions/40959754/c-sharp-is-the-point-in-triangle
    var a =
        ((p2.Y - p3.Y) * (testPoint.X - p3.X) + (p3.X - p2.X) * (testPoint.Y - p3.Y)) /
        ((p2.Y - p3.Y) * (p1.X - p3.X) + (p3.X - p2.X) * (p1.Y - p3.Y));
    var b =
        ((p3.Y - p1.Y) * (testPoint.X - p3.X) + (p1.X - p3.X) * (testPoint.Y - p3.Y)) /
        ((p2.Y - p3.Y) * (p1.X - p3.X) + (p3.X - p2.X) * (p1.Y - p3.Y));
    var c = 1.0 - a - b;
    return a >= 0.0 && a <= 1.0 && b >= 0.0 && b <= 1.0 && c >= 0.0 && c <= 1.0;
}
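Tying the two snippets together, the selection ends up looking something like this. Vector2D here is a minimal stand-in for the real type, and GetSpriteOrientation is an illustrative name rather than the actual method:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the real Vector2D
public record Vector2D(double X, double Y)
{
    public Vector2D Rotate(double radians) =>
        new(X * Math.Cos(radians) - Y * Math.Sin(radians),
            X * Math.Sin(radians) + Y * Math.Cos(radians));
}

public static class OrientationSketch
{
    // Same barycentric test as above, condensed
    public static bool IsPointInTriangle(Vector2D p1, Vector2D p2, Vector2D p3, Vector2D t)
    {
        var denom = (p2.Y - p3.Y) * (p1.X - p3.X) + (p3.X - p2.X) * (p1.Y - p3.Y);
        var a = ((p2.Y - p3.Y) * (t.X - p3.X) + (p3.X - p2.X) * (t.Y - p3.Y)) / denom;
        var b = ((p3.Y - p1.Y) * (t.X - p3.X) + (p1.X - p3.X) * (t.Y - p3.Y)) / denom;
        var c = 1.0 - a - b;
        return a is >= 0.0 and <= 1.0 && b is >= 0.0 and <= 1.0 && c is >= 0.0 and <= 1.0;
    }

    // Close each (start, end) vector pair with the origin to form a triangle,
    // then find which triangle the half-length player-relative vector lands in;
    // the index selects the correctly oriented sprite
    public static int GetSpriteOrientation(
        IReadOnlyList<(int quadrant, Vector2D start, Vector2D end)> triangles,
        Vector2D playerRelativePosition)
    {
        var origin = new Vector2D(0.0, 0.0);
        var point = new Vector2D(playerRelativePosition.X * 0.5, playerRelativePosition.Y * 0.5);
        foreach (var (quadrant, start, end) in triangles)
        {
            if (IsPointInTriangle(origin, start, end, point)) return quadrant;
        }
        return 0; // on a shared edge; fall back to the first orientation
    }
}
```

For a guard facing east and a player due east of the guard, the half-length vector lands in triangle 0 - the one centred on the guard’s facing direction.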
Honestly this bit hurt my brain the most. I’m not the most “visually brained” of people and trying to picture this (particularly the relationship between the orientation and the player’s position) was difficult. In the end I resorted to a graph-paper diagram of things, and once I realised I was drawing lines it all started to make a lot more sense.
Handling depth
If you recall, during our wall rendering we were tracking a bunch of information as we rendered. We’ve already made use of some of this information in the previous part; however, the thing we tracked but didn’t use was a z-index array. For each column of the viewport we stored the depth of the wall that was rendered using the perpendicular distance from the player:
wallRenderResult = wallRenderResult with
{
    ZIndexes = wallRenderResult.ZIndexes.Add(perpendicularWallDistance),
    WallInFrontOfPlayer =
        viewportX == viewportSize.width / 2
            ? (rayCastResult.MapHit.x, rayCastResult.MapHit.y)
            : wallRenderResult.WallInFrontOfPlayer,
    DistanceToWallInFrontOfPlayer =
        viewportX == viewportSize.width / 2
            ? perpendicularWallDistance
            : wallRenderResult.DistanceToWallInFrontOfPlayer,
    IsDoorInFrontOfPlayer =
        viewportX == viewportSize.width / 2
            ? cellHit switch {Door => true, _ => false}
            : wallRenderResult.IsDoorInFrontOfPlayer
};
We render the sprites in a similar fashion: column-wise and (as you’ve probably figured out) we compare the perpendicular distance of the game object from the player with the distance of the wall stored in the z-index array. If the game object is nearer we draw that column of the sprite; if it’s further away we don’t. This allows sprites to be perfectly clipped by corners, opening/closing doors etc.
To handle the depth of multiple sprites in the viewport (one may be in front of another) we simply order the game object array by distance from the player, with a slight optimisation - we don’t actually take the square root needed to calculate the actual distance using Pythagoras. We don’t need to - we’re just interested in the relative distances.
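That ordering can be sketched as follows - the Sprite record and names here are illustrative stand-ins for the real game object types:

```csharp
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;

// Illustrative stand-in for a game object with a map position
public record Sprite(double X, double Y);

public static class DepthSort
{
    // Squared distance preserves relative order, so Math.Sqrt can be skipped
    static double SquaredDistance(Sprite s, double camX, double camY)
    {
        var dx = s.X - camX;
        var dy = s.Y - camY;
        return dx * dx + dy * dy;
    }

    // Farthest first, so nearer sprites are painted over more distant ones
    public static ImmutableArray<Sprite> Order(
        IEnumerable<Sprite> sprites, double camX, double camY) =>
        sprites
            .OrderByDescending(s => SquaredDistance(s, camX, camY))
            .ToImmutableArray();
}
```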
The actual render
With all that work done the actual rendering is pretty simple and almost identical to the wall rendering code:
for (var stripe = drawStartX; stripe < drawEndX; stripe++)
{
    if (transformY > 0.0 && stripe > 0 && stripe < viewportSize.width && transformY < zIndexes[stripe])
    {
        var textureX =
            (int) (256.0 * (stripe - (-spriteWidth / 2.0 + spriteScreenX)) * 64.0 / spriteWidth) / 256;
        for (int y = drawStartY; y < drawEndY; y++)
        {
            var texY = (int) ((y - viewportSize.height / 2.0 + lineHeight / 2.0) * step);
            var color = *(srcPtr + texY * spriteTexture.Width + textureX);
            if (!Pixel.IsTransparent(color))
            {
                *(destPtr + y * viewportSize.width + stripe) = color;
            }
        }
    }
}
The only significant difference is that sprites can be transparent, whereas the walls we’ve rendered so far are fully opaque, and so there is a simple “is transparent” check before we render the color.
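For reference, a minimal version of that check, assuming 32-bit ARGB pixels where a zero alpha byte means transparent - the real Pixel type may use a different layout or a sentinel colour instead:

```csharp
public static class PixelSketch
{
    // Assumes ARGB packed into a uint with alpha in the top byte
    public static bool IsTransparent(uint argb) => (argb >> 24) == 0;
}
```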
Next Steps
Next is AI, and I’ll probably also add on a desktop build target.
If you want to discuss this or have any questions then the best place is on GitHub. I’m only really using Twitter to post updates to my blog these days.