Here is a bit of an explanation for how the demo works.
The level structure is generated into a 2D grid by recursively splitting the available area into small rooms. It is similar to the method described
in this article. There is a random chance that the walls between neighbouring rooms will be removed, and large rooms may have their centres cut out to create a ring shape.
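The recursive split can be sketched roughly like this. This is a minimal Python sketch; the function name, the min_size parameter, and the split-until-too-small policy are my assumptions, not the demo's exact code:

```python
import random

def split_rooms(x, y, w, h, min_size=4, rooms=None):
    """Recursively split a rectangular area into rooms.

    Returns a list of (x, y, w, h) rectangles. min_size (the smallest
    side a room may have) is an assumed parameter."""
    if rooms is None:
        rooms = []
    can_split_w = w >= 2 * min_size
    can_split_h = h >= 2 * min_size
    if not can_split_w and not can_split_h:
        rooms.append((x, y, w, h))     # too small to split further
        return rooms
    # Split across the longer axis (when both axes are splittable).
    if can_split_w and (w >= h or not can_split_h):
        cut = random.randint(min_size, w - min_size)
        split_rooms(x, y, cut, h, min_size, rooms)
        split_rooms(x + cut, y, w - cut, h, min_size, rooms)
    else:
        cut = random.randint(min_size, h - min_size)
        split_rooms(x, y, w, cut, min_size, rooms)
        split_rooms(x, y + cut, w, h - cut, min_size, rooms)
    return rooms
```

A real generator would also stop splitting early at random to get larger rooms; that is left out here to keep the sketch short.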
For each room, a set of walls is calculated: a wall exists wherever an empty cell borders a wall cell. For each edge that connects two neighbouring rooms, a 'portal' is created instead. The vertex end points for each of the walls/portals are stored in a list, and vertices can be shared between connected walls.
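One possible layout for the shared-vertex list is sketched below. All of the names here are hypothetical, not taken from the demo:

```python
# Vertices are shared by storing indices into a single list.
vertices = []   # list of (x, y) end points
edges = []      # list of dicts: {'v0': i, 'v1': j, 'portal': room_id or None}

def add_vertex(p):
    """Return the index of p, reusing an existing vertex if present."""
    if p in vertices:
        return vertices.index(p)
    vertices.append(p)
    return len(vertices) - 1

def add_edge(p0, p1, connected_room=None):
    """Add a wall, or a portal if it connects to a neighbouring room."""
    edges.append({'v0': add_vertex(p0), 'v1': add_vertex(p1),
                  'portal': connected_room})
```

With this layout, two walls that meet at a corner point at the same vertex index, which is what allows the end points to be shared.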
This is how the visibility algorithm works:
The renderer has a queue of rooms to draw. It first finds which room the camera is in and adds it to the queue. It can do this quickly because the 2D grid structure stores which room each cell belongs to, so the room can be looked up from the cell the camera is in.
For each room in the queue:
- Transform all the vertices into view space and render the walls (more details below on how this works)
- For each visible portal:
  - if the connected room has not yet been rendered then add it to the queue
Some extra data in the render queue stores the left and right clipping regions. Imagine a portal being the doorway into the next room: the renderer will skip rendering anything that is not visible within the region of the doorway. There is also an optimisation that limits how many portals deep the renderer will go.
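The queue walk above can be sketched like this. The callbacks (portals_of, connects_to, portal_visible) and the clip-region representation are assumptions made for the sketch, not the demo's actual interfaces:

```python
from collections import deque

def visible_rooms(start_room, portals_of, connects_to, portal_visible,
                  max_depth=4):
    """Breadth-first walk of rooms through portals.

    portals_of(room) yields a room's portal ids, connects_to(portal)
    gives the room on the other side, and portal_visible(portal, clip)
    returns the narrowed clip region, or None if fully clipped."""
    full_screen = (0.0, 1.0)          # left/right clip as screen fractions
    queue = deque([(start_room, full_screen, 0)])
    rendered = {start_room}
    order = []
    while queue:
        room, clip, depth = queue.popleft()
        order.append((room, clip))    # draw this room's walls within clip
        if depth >= max_depth:        # limit how many portals deep to go
            continue
        for portal in portals_of(room):
            narrowed = portal_visible(portal, clip)
            nxt = connects_to(portal)
            if narrowed is not None and nxt not in rendered:
                rendered.add(nxt)
                queue.append((nxt, narrowed, depth + 1))
    return order
```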
For the actual wall rendering:
Walls are just lines which are stored as two 2D points (vertices).
- Each vertex is transformed from world space (x, y) to camera space (x, z), by translating relative to the camera and rotating by the camera's yaw.
- Each vertex in camera space can be thought of being in the space (x, z) where z is the depth from the camera.
- If both vertices are behind the front clipping plane then the wall is behind the camera and can be discarded.
- If only one vertex is behind the clipping plane then it is projected onto the clipping plane, so that the clipping plane 'cuts' the line.
- The x coordinate has perspective projection applied: x = K * x / z, where K is some constant
- The x coordinate needs to be transformed to screen (pixel) space: x = x + half_display_width
- For each vertex a w value is generated: w = wall_height * K / z, where K is the constant
- The w value is the on-screen height of the wall, in pixels, at that end of the wall
- The w values between the two ends of the wall can be calculated by linear interpolation. I use something a bit like Bresenham's line algorithm to do this.
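The per-vertex steps above might look like this in rough form. The value of K, the near distance, the display width, and the rotation sign convention are all assumed values for the sketch:

```python
import math

def project_wall(v0, v1, cam_pos, cam_yaw, K=160.0, near=0.1,
                 half_width=160, wall_height=1.0):
    """Transform a wall's two world-space (x, y) end points into
    screen-space (x, w) pairs, following the steps above."""
    def to_camera(v):
        # Translate relative to the camera, then rotate by its yaw.
        dx, dy = v[0] - cam_pos[0], v[1] - cam_pos[1]
        s, c = math.sin(cam_yaw), math.cos(cam_yaw)
        return (dx * c - dy * s, dx * s + dy * c)  # (x, z)
    a, b = to_camera(v0), to_camera(v1)
    # Both vertices behind the front clipping plane: discard the wall.
    if a[1] <= near and b[1] <= near:
        return None
    # One vertex behind the plane: project it onto z = near.
    def clip(p, q):
        t = (near - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), near)
    if a[1] <= near:
        a = clip(a, b)
    elif b[1] <= near:
        b = clip(b, a)
    def to_screen(p):
        x = K * p[0] / p[1] + half_width   # perspective, then pixel space
        w = wall_height * K / p[1]         # on-screen height at this end
        return (x, w)
    return to_screen(a), to_screen(b)
```

Note that since w is proportional to 1/z, interpolating it linearly across screen columns is perspective correct.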
A few extra details:
- Walls are back-face culled. This is simple: if the second vertex's screen-space x value is to the left of the first vertex's, then you are looking at the back face of the wall.
- The wall rendering doesn't draw directly into the frame buffer. A 1D array of w values (the wBuffer) is filled first; it is the size of the display width. The closest (i.e. largest) w value is kept where walls overlap.
- For each value in the wBuffer, a precompiled scaler routine is called based on the scale of the wall. This is an optimised unrolled loop that draws to the frame buffer.
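Putting the back-face cull and the wBuffer together, a rough sketch might look like the following. The wall input format matches the projection step (each wall is a pair of screen-space (x, w) points); the buffer width is an assumed value, and the per-column scaler call is left out:

```python
def fill_w_buffer(walls, width=320):
    """Rasterise projected walls into a 1D buffer of w values, keeping
    the largest (closest) value where walls overlap.

    Each wall is ((x0, w0), (x1, w1)) in screen space."""
    wbuf = [0.0] * width
    for (x0, w0), (x1, w1) in walls:
        # Back-face cull: second vertex to the left means a back face.
        if x1 <= x0:
            continue
        # Linearly interpolate w across the wall's screen columns.
        for x in range(max(int(x0), 0), min(int(x1), width)):
            t = (x - x0) / (x1 - x0)
            w = w0 + t * (w1 - w0)
            if w > wbuf[x]:
                wbuf[x] = w
    return wbuf
```

A second pass over the finished buffer would then call the scaler routine for each column, which is where the unrolled drawing loop comes in.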
Hope this all makes sense!