One of the inputs to rendering via programmable shading on a modern graphics card is a collection of vertices, each associated with per-vertex properties used in shader computations. When programming the GPU, this collection of vertices is commonly abstracted as a vertex buffer, which is essentially just a bag of bytes. The collection of vertices describes a primitive topology of points, lines, or triangles, along with an optional index buffer (for triangles) that describes vertex sharing. Again, the abstractions for describing these topologies are very weak and essentially amount to flags and arrays, although we can informally refer to the vertex buffer plus its primitive topology description as a geometry. Each vertex first undergoes processing individually in a vertex shader, and then (optionally) per primitive (point, line, or triangle) in a geometry shader, where a new geometry can be emitted with fewer or more vertices or even a completely different topology. By the end of vertex and geometry shading, the position of each vertex must be specified. Finally, each pixel of the rasterized primitive is processed by a pixel shader to determine its color. This is a simplification: the GPU supports other features such as instancing, where we reuse the vertex buffer multiple times in a single rendering, and geometry shader stream-out, where we save the result of vertex/geometry shading for later reuse.
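To make the "vertex buffer plus topology description" abstraction concrete, here is a minimal Python sketch. All names here are hypothetical for illustration; this is not any real graphics API, just a model of the data involved:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Topology(Enum):
    POINT = 1
    LINE = 2
    TRIANGLE = 3

@dataclass
class Geometry:
    """A vertex buffer plus its primitive topology description.

    The vertex buffer is modeled as a list of per-vertex property
    records (in the real API it is just a bag of bytes); the index
    buffer, used with triangle topologies, records which vertices are
    shared between primitives.
    """
    vertices: list                   # per-vertex property records
    topology: Topology
    indices: Optional[list] = None   # triangle topologies only

    def primitive_count(self) -> int:
        # Number of vertex slots consumed per primitive.
        verts_per_prim = self.topology.value
        source = self.indices if self.indices is not None else self.vertices
        return len(source) // verts_per_prim

# A quad as two indexed triangles sharing an edge: 4 vertices, 6 indices.
quad = Geometry(
    vertices=[(0, 0), (1, 0), (1, 1), (0, 1)],
    topology=Topology.TRIANGLE,
    indices=[0, 1, 2, 0, 2, 3],
)
```

Note how weak this is: nothing above says what the per-vertex records mean, which is exactly the "flags and arrays" problem.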
Bling does a good job of abstracting vertices and pixels: in Bling shader code, one value represents both the vertex and the pixel, and the vertex position and pixel color are specified with respect to this value. Bling infers whether the code is referring to a vertex or a pixel through a simple dependency analysis: we are referring to a pixel whenever we refer to anything that is only available in the pixel shader; otherwise we are referring to a vertex, and the computation result (if used in the pixel shader) will be interpolated from the analogous results at the primitive's other vertices. However, Bling's vertex/pixel abstraction breaks down when dealing with geometry: the input to a rendering is simply the vertex count and primitive data, and Bling has no high-level geometry abstraction. This lack prevents geometries from being manipulated, composed, and constructed at a high level, and prevents geometry shaders from being expressed at all unless shader partitioning is specified explicitly. And this is a pity, because geometry shaders can do so many cool things, such as dynamically forming fur, hair, feathers, sparkles, blobs, and so on. We definitely need to make manipulating geometries easier.
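The dependency analysis is simple enough to sketch. Assuming a toy expression representation (tagged leaves plus operator tuples; this is not Bling's actual implementation, just the idea): an expression belongs to the pixel shader as soon as any of its inputs is only available there, and otherwise runs in the vertex shader with its result interpolated.

```python
# Each leaf input is tagged with the earliest stage where it is available.
VERTEX, PIXEL = "vertex", "pixel"

def stage_of(expr):
    """Classify an expression as a vertex or pixel computation.

    expr is either a tagged leaf (VERTEX or PIXEL) or a tuple
    (op, *args). The expression is pixel-stage if any input is
    pixel-only; otherwise it is vertex-stage and its result would be
    interpolated across the primitive for use in the pixel shader.
    """
    if expr in (VERTEX, PIXEL):
        return expr
    _op, *args = expr
    return PIXEL if any(stage_of(a) == PIXEL for a in args) else VERTEX

# A position transform touches only vertex inputs -> vertex shader.
# A dot product with a per-pixel input -> pixel shader.
```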
So here is my new realization: give geometries their own abstraction in Bling, defined as one of the following:
The benefit of this approach is immediately apparent: depending on our needs, we can define geometries mathematically or load them from a mesh. Because properties aren't included explicitly in the geometry, we can mix and match separate geometries as long as they share the same topology. We can then synthesize the index buffer (if needed) automatically, removing that burden from the user, and infer where geometry instancing occurs.
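Synthesizing the index buffer is essentially vertex deduplication. A minimal sketch, assuming vertices are comparable, hashable values (the function name is mine, not Bling's):

```python
def synthesize_index_buffer(triangle_vertices):
    """Given a flat triangle list (3 vertices per triangle, with
    duplicates wherever triangles share corners), deduplicate the
    vertices and build the index buffer that records the sharing.

    Returns (unique_vertices, indices).
    """
    unique, lookup, indices = [], {}, []
    for v in triangle_vertices:
        if v not in lookup:
            lookup[v] = len(unique)
            unique.append(v)
        indices.append(lookup[v])
    return unique, indices

# A quad expressed as two unindexed triangles: 6 vertices, 2 of them shared.
flat = [(0, 0), (1, 0), (1, 1),   # triangle 1
        (0, 0), (1, 1), (0, 1)]   # triangle 2
verts, idx = synthesize_index_buffer(flat)
# verts holds the 4 unique vertices; idx is [0, 1, 2, 0, 2, 3].
```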
Unfortunately, vertex properties are more difficult to specify. Before, each vertex had one vertex ID and an optional instance ID, which we would use to compute the vertex's position and the pixel's color. These IDs could be used as indexes into a table (in the case of a mesh) to look up initial values for properties like position and normal, or they could be used in richer computations; e.g., computing a per-pixel normal from the derivative of a parametric surface. At any rate, we inferred what properties to include in the vertex buffer in a way that allowed an expressed computation to span the CPU and GPU, which is a very good feature from an optimization and reuse perspective. Now, if we allow geometries to be composed, duplicated, and transformed, vertex addressing obviously becomes much more difficult when inferring properties. There are a few directions we could go in to solve this problem:
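To see why addressing gets harder, consider the simplest possible composition: concatenating two indexed geometries. A hypothetical sketch (again, not Bling's API):

```python
def concat_geometries(vertices_a, indices_a, vertices_b, indices_b):
    """Concatenate two indexed geometries into one.

    The second geometry's indices must be re-based by the first
    geometry's vertex count. Afterward, a vertex ID alone no longer
    says which source geometry (and hence which property table) the
    vertex came from -- which is what breaks the old ID-based property
    inference.
    """
    offset = len(vertices_a)
    vertices = vertices_a + vertices_b
    indices = indices_a + [i + offset for i in indices_b]
    return vertices, indices

# Two quads, each 4 vertices / 6 indices:
va, ia = [(0, 0), (1, 0), (1, 1), (0, 1)], [0, 1, 2, 0, 2, 3]
vb, ib = [(2, 0), (3, 0), (3, 1), (2, 1)], [0, 1, 2, 0, 2, 3]
all_verts, all_idx = concat_geometries(va, ia, vb, ib)
# The second quad's indices become [4, 5, 6, 4, 6, 7].
```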
Any solution that I come up with has to play nicely with geometry shaders, which have the ability to transform geometries based on dynamic information. I'll get into this in my next post.