Designing a 3D Rendering Library for .NET Core

In this second post, I’ll be exploring Veldrid, the library powering all of the 3D and 2D rendering in the game engine for Crazy Core. I’ll be discussing what the library does, why I built it, and how it works.

NOTE: A basic understanding of graphics APIs is recommended for some of the content discussed in this post. For beginners, I would suggest looking at the example code below to get a general idea of the concepts involved.

[Image: http://i.imgur.com/4TlmVuh.png]

One of the most obvious benefits of using a managed language runtime like .NET is that your program is immediately portable to any system which supports that runtime. This benefit disappears once you start using native libraries, or relying on other platform-specific functionality. How, then, do you design a hardware-accelerated 3D application which is able to run on a variety of operating systems and with a variety of graphics APIs? Well, you make an abstraction layer, and code against that! As with any programming abstraction, trade-offs must be made very carefully in order to hide complexity while still maintaining a powerful and expressive programming model. With Veldrid, I had a few goals and non-goals:

Goals of Veldrid

  • Allow you to write abstract code which does not bind to any particular graphics API. Provide concrete implementations for Direct3D 11 and OpenGL 3+.
  • Follow usual graphics API patterns. Veldrid does not invent its own notation or quirkiness (graphics APIs have enough of their own).
  • Be fast. Don’t impose tons of unnecessary overhead. Encourage patterns that don’t allocate memory during the normal rendering loop and allocate minimal memory otherwise.

Non-Goals of Veldrid

  • Allow you to program 3D graphics without knowing 3D graphics concepts. Veldrid’s interface is slightly more abstract than concrete APIs like OpenGL or D3D, but the same concepts are exposed.
  • Expose all of the features of individual APIs. Concepts exposed through Veldrid should be expressible with all backends; nothing should throw a NotSupportedException without very good reason. Different performance characteristics for the same concepts are OK and expected (within reason), as long as the behavior is not observably different.

Feature Set

  • Programmable vertex, fragment, and geometry shaders
  • Vertex and index buffers, including multiple input vertex buffers
  • A flexible material system, with vertex layout and shader variable management
  • Indexed and instanced rendering
  • Customizable blend, depth-stencil, and rasterizer states
  • Customizable framebuffers and render targets
  • 2D and cubemap textures

Show Me The Code

Now that is all well and good, but what does a program using Veldrid actually look like? And more generally: what does it even mean to use an abstract rendering library? To help demonstrate, I created the aptly-named “Veldrid Tiny Demo”. Let’s take a walk through the code and see how it works. The full project is linked for those who would like to tinker with it. It uses the new MSBuild-based tooling for .NET Core, so building it is easy, fast, and foolproof.

Setting up a window


bool isWindows = RuntimeInformation.IsOSPlatform(OSPlatform.Windows);
OpenTKWindow window = new SameThreadWindow();
RenderContext rc;
if (isWindows && !args.Contains("opengl"))
{
    rc = new D3DRenderContext(window);
}
else
{
    rc = new OpenGLRenderContext(window);
}
window.Title = "Veldrid TinyDemo";


[Screenshot: the TinyDemo window, blank so far]

Wow, we made a blank window. Amazing! What’s this other stuff about a “RenderContext”, though? What are all these methods on it, and what the heck do I do with it? Simply put, a RenderContext is the core object representing your computer’s graphics device. It is the object that lets you create GPU resources, control device state, and perform low-level drawing operations.
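
To make the rest of the walkthrough easier to follow, here is a simplified, hypothetical outline of the RenderContext members this post actually uses. The real type exposes quite a bit more than this, and the exact signatures are illustrative rather than copied from the library.

// Simplified, illustrative outline of RenderContext; only the members used in this post are shown.
public abstract class RenderContext
{
    // Creates GPU resources: vertex/index buffers, materials, textures, framebuffers.
    public abstract ResourceFactory ResourceFactory { get; }

    public abstract void ClearBuffer();                     // Clear the current framebuffer.
    public abstract void SetViewport(int x, int y, int width, int height);
    public abstract void SetVertexBuffer(VertexBuffer vb);
    public abstract void SetIndexBuffer(IndexBuffer ib);
    public abstract void SetMaterial(Material material);
    public abstract void DrawIndexedPrimitives(int count);  // Issue an indexed draw call.
    public abstract void SwapBuffers();                     // Present the back buffer to the window.
}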

Creating device resources

This demo renders a rotating 3D cube in the center of the screen. In order to do that, we need to create a few GPU resources first. In Veldrid, all graphics resources are created using a ResourceFactory, accessible from a RenderContext. These resources will look familiar to anyone who has written graphics code before. We need:

  • A vertex buffer containing the vertices of the cube mesh
  • An index buffer containing the indices of the cube mesh
  • A “material”, which is a compound object containing
    • A vertex shader and a fragment shader.
    • A description of the input layout of the vertex data.
    • A description of the global shader parameters used.


VertexBuffer vb = rc.ResourceFactory.CreateVertexBuffer(
    Cube.Vertices,
    new VertexDescriptor(VertexPositionColor.SizeInBytes, 2),
    isDynamic: false);
IndexBuffer ib = rc.ResourceFactory.CreateIndexBuffer(
    Cube.Indices,
    isDynamic: false);


A VertexBuffer is created which contains the simple 3D cube data contained in the static Cube class. An IndexBuffer is created containing the static index data for the cube mesh.
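
The Cube class itself isn’t shown in the post. As a rough, hypothetical sketch (the actual values, the index type, and the type of the Color field may differ), it could look something like this, matching the Float3 position / Float4 color layout declared in the material further down:

using System.Numerics;

// Hypothetical sketch of the static cube data referenced above; not the actual TinyDemo source.
public struct VertexPositionColor
{
    public const int SizeInBytes = 28; // 3 floats for Position + 4 floats for Color, 4 bytes each.

    public Vector3 Position;
    public Vector4 Color;

    public VertexPositionColor(Vector3 position, Vector4 color)
    {
        Position = position;
        Color = color;
    }
}

public static class Cube
{
    public static readonly VertexPositionColor[] Vertices =
    {
        new VertexPositionColor(new Vector3(-0.5f, 0.5f, -0.5f), new Vector4(1, 0, 0, 1)),
        // ... the remaining corners, one color per vertex ...
    };

    public static readonly ushort[] Indices =
    {
        0, 1, 2, 0, 2, 3, // One face: two triangles.
        // ... the remaining faces ...
    };
}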


DynamicDataProvider<Matrix4x4> viewProjection = new DynamicDataProvider<Matrix4x4>();


A DynamicDataProvider is a simple abstraction facilitating the transfer of data to global shader parameters. In this simple example, we only have two pieces of data that we need to send to the vertex shader: the camera’s view and projection matrices. I’ve combined these into a single Matrix4x4 for simplicity.
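
Conceptually, and only as a rough sketch (the real class is more involved than this), a dynamic data provider just holds a value and notifies the active backend whenever that value changes so the corresponding uniform or constant buffer can be re-uploaded:

using System;

// Conceptual sketch only; not the actual Veldrid implementation.
public class DynamicDataProvider<T> where T : struct
{
    private T _data;

    public event Action DataChanged; // The active backend listens and re-uploads the shader constant.

    public T Data
    {
        get { return _data; }
        set
        {
            _data = value;
            DataChanged?.Invoke();
        }
    }
}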


Material material = rc.ResourceFactory.CreateMaterial(rc,
    "vertex", "fragment",
    new MaterialVertexInput(VertexPositionColor.SizeInBytes,
        new MaterialVertexInputElement(
            "Position", VertexSemanticType.Position, VertexElementFormat.Float3),
        new MaterialVertexInputElement(
            "Color", VertexSemanticType.Color, VertexElementFormat.Float4)),
    new MaterialInputs<MaterialGlobalInputElement>(
        new MaterialGlobalInputElement(
            "ViewProjectionMatrix", MaterialInputType.Matrix4x4, viewProjection)),
    MaterialInputs<MaterialPerObjectInputElement>.Empty,
    MaterialTextureInputs.Empty);


Arguably the most complicated part of the example, this creates the “material” object described above. There are several pieces of information needed to create this resource:

  • The names of the vertex and fragment shaders. In this case, they are simply called “vertex” and “fragment”.
  • A description of each element of the vertex input data. Our cube has only two pieces of per-vertex data: a 3D position and a color.
  • A description of the global shader inputs. As mentioned above, we only have a single buffer which holds a combined view-projection matrix.

Drawing

Now that we have all of our GPU resources, we can draw something! In this demo, rendering happens in a very simple loop. The shader parameters are changed every iteration of the loop in order to give the cube a rotating appearance.


while (window.Exists)
{
    InputSnapshot snapshot = window.GetInputSnapshot(); // Process window events.
    rc.ClearBuffer(); // Clear the screen.
    rc.SetViewport(0, 0, window.Width, window.Height); // Ensure the viewport covers the whole window, in case it was resized.
    float timeFactor = Environment.TickCount / 1000f; // Get a rough time estimate.
    viewProjection.Data =
        // Create a rotated camera matrix based on the current time.
        Matrix4x4.CreateLookAt(
            new Vector3(2 * (float)Math.Sin(timeFactor), (float)Math.Sin(timeFactor), 2 * (float)Math.Cos(timeFactor)),
            Vector3.Zero, // Always look at the world origin.
            Vector3.UnitY)
        // And combine it with a perspective projection matrix.
        * Matrix4x4.CreatePerspectiveFieldOfView(1.05f, (float)window.Width / window.Height, .5f, 10f);
    rc.SetVertexBuffer(vb); // Attach the cube vertex buffer.
    rc.SetIndexBuffer(ib); // Attach the cube index buffer.
    rc.SetMaterial(material); // Attach the material.
    rc.DrawIndexedPrimitives(Cube.Indices.Length); // Draw the cube.
    rc.SwapBuffers(); // Swap the back-buffer and present the scene to the window.
}


First, the screen is cleared and the viewport is set to cover the whole window. Earlier, I said that we would be rendering a “rotating 3D cube”. More accurately, though, the camera itself is rotating around a static cube sitting at the world origin. When “viewProjection.Data” is assigned to, the matrix value is propagated into the vertex shader’s “ViewProjectionMatrix” parameter. We bind the three resources we created earlier to the RenderContext, call DrawIndexedPrimitives, and then swap the context’s back buffer, which presents the rendered scene to the window.

[Animation: the rotating cube rendered by TinyDemo]

An obvious thing to notice in the code above is that there is no mention of any concrete graphics API (with the exception of context creation). All of the example code will work and behave the same on both OpenGL and Direct3D. The full project is available at the project page on GitHub; I encourage you to download it and experiment!

Behind the Scenes

What happens during one of these calls? Let’s dig a little deeper with two examples.


VertexBuffer vb = rc.ResourceFactory.CreateVertexBuffer(
    Cube.Vertices,
    new VertexDescriptor(VertexPositionColor.SizeInBytes, 2),
    isDynamic: false);


People familiar with OpenGL will know that vertex buffers are stored in special objects called VBOs, and those familiar with Direct3D have used a generic “Buffer” to store lots of different things. When the OpenGL backend is asked to create a VertexBuffer, it does the work of creating a VBO for you, filling it with your vertex data, and storing auxiliary information about that buffer. The Direct3D backend does the same by creating and filling an ID3D11Buffer object.

“VertexBuffer” itself is an interface exposing operations useful for vertex buffers, like setting vertex data, retrieving it, and mapping the buffer into the CPU’s address space. The Direct3D11 and OpenGL backends each return their own derived version of a VertexBuffer, a D3DVertexBuffer or an OpenGLVertexBuffer, and their operations are implemented through specific calls into each of those graphics APIs. This same pattern is used for all of the graphics resources available in Veldrid.
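
In outline, the pattern looks roughly like this. This is a trimmed-down, hypothetical sketch rather than the actual Veldrid source; apart from the type names mentioned above, the member names are illustrative.

using System;

// Trimmed-down sketch of the interface-plus-backend pattern used throughout Veldrid.
public interface VertexBuffer : IDisposable
{
    void SetVertexData<T>(T[] vertexData) where T : struct; // Upload new vertex data.
}

internal class OpenGLVertexBuffer : VertexBuffer
{
    private int _vboId; // The VBO name obtained from glGenBuffers.

    public void SetVertexData<T>(T[] vertexData) where T : struct
    {
        // Bind _vboId and upload with glBufferData / glBufferSubData (via OpenTK).
    }

    public void Dispose() { /* glDeleteBuffers(_vboId) */ }
}

internal class D3DVertexBuffer : VertexBuffer
{
    // Wraps an ID3D11Buffer created through SharpDX.

    public void SetVertexData<T>(T[] vertexData) where T : struct
    {
        // Update the underlying ID3D11Buffer (UpdateSubresource or Map/Unmap).
    }

    public void Dispose() { /* Release the ID3D11Buffer. */ }
}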

The next example is from the main rendering loop:


rc.DrawIndexedPrimitives(Cube.Indices.Length); // Draw the cube.


What, concretely, does this do? Let’s look at the code handling this for OpenGL:


public override void DrawIndexedPrimitives(int count, int startingIndex)
{
    PreDrawCommand();
    DrawElementsType elementsType = ((OpenGLIndexBuffer)IndexBuffer).ElementsType;
    int indexSize = OpenGLFormats.GetIndexFormatSize(elementsType);
    GL.DrawElements(_primitiveType, count, elementsType, new IntPtr(startingIndex * indexSize));
}


DrawIndexedPrimitives is translated down into a single call to glDrawElements, and the parameters are pulled from state stored in the RenderContext (the primitive type), as well as from the currently-bound IndexBuffer (the format of the index data).

What does the Direct3D backend do?


public override void DrawIndexedPrimitives(int count, int startingIndex, int startingVertex)
{
    _deviceContext.DrawIndexed(count, startingIndex, startingVertex);
}


The call is simply translated into ID3D11DeviceContext::DrawIndexed. All other relevant state is already set when the VertexBuffer and IndexBuffer are bound to the RenderContext.

One thing you will notice if you look through the code is that, while most of the graphics resources in Veldrid are returned and exchanged as interfaces, the code treats them as strongly-typed objects in each backend. The D3D backend, for example, always assumes that it will be passed a D3DVertexBuffer or a D3DShader. This means you will encounter catastrophic exceptions if you, for some reason, attempt to pass an OpenGLVertexBuffer to a D3DRenderContext. See my thoughts at the end of the post about this design decision.
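
For illustration, here is a hedged sketch of what such a backend method might look like; the Apply call and the field names are hypothetical. The cast is what fails, with an InvalidCastException, if the wrong backend’s resource is passed in.

// Hypothetical sketch: the D3D backend assumes its own resource types.
public override void SetVertexBuffer(VertexBuffer vb)
{
    // Throws InvalidCastException if an OpenGLVertexBuffer is passed in.
    D3DVertexBuffer d3dBuffer = (D3DVertexBuffer)vb;
    d3dBuffer.Apply(_deviceContext); // Illustrative: binds the underlying ID3D11Buffer.
}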

What Worked Well, What Didn’t

How well did the library meet the goals that I set out to accomplish? These are the things that went reasonably well:

  • The API is cohesive and exposes a good feature set while remaining API-agnostic.
  • The concepts are similar enough that you can usually follow OpenGL or D3D tutorials and map the concepts pretty easily into Veldrid.
  • Only a minimal number of “API leaks” need to be hacked around in the backend code. OpenGL and D3D are similar enough that I can paper over most differences without losing tons of functionality or speed.
    • Example: OpenGL requires depth testing to be (globally) disabled if a framebuffer is bound without a depth texture. D3D doesn’t seem to care about this, or handles it internally. Because of this, the OpenGL backend disables the global depth testing state when a depthless framebuffer is bound, even if the currently-bound depth state should otherwise be enabled. This sort of problem does not leak through to the end-user of the library, but it does make an otherwise clean implementation a bit uglier.
  • Performance is good. This isn’t a “zero-cost abstraction”, but the abstraction is thin enough.
    • Individual backends are able to track GPU state and defer or elide calls that would have no effect. For example, if two objects that are rendered one after another use the same vertex data, then the second object’s calls to SetVertexBuffer() and SetIndexBuffer() will essentially be no-ops, avoiding costly GPU state changes (see the sketch after this list).
    • OpenTK and SharpDX are both very good, thin, fast wrappers for the respective graphics APIs. There is minimal overhead for calling into them when it’s needed.
  • It’s trivial to switch between backends. The Veldrid RenderDemo supports switching between OpenGL and Direct3D at runtime (without a restart).
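
As referenced in the list above, the redundant state-change check is about as simple as it sounds. A minimal sketch (not the actual Veldrid code, and with illustrative names) looks like this:

// Minimal sketch of redundant state-change elision; names are illustrative.
public void SetVertexBuffer(VertexBuffer vb)
{
    if (vb == _currentVertexBuffer)
    {
        return; // Already bound: skip the GPU state change entirely.
    }

    _currentVertexBuffer = vb;
    PlatformSetVertexBuffer(vb); // Backend-specific: glBindBuffer / IASetVertexBuffers, etc.
}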

On the other hand, here are a few of my top problems after using the library in quite a few of my projects:

  • There is no unification of shader code. You need to write both GLSL and HLSL code separately, and do so in a way that behaves identically with the D3D and OpenGL backends. This means shaders need to expose the same inputs (uniforms/constant buffers), the same vertex layouts, the same texture inputs, etc. How do others handle this?
    • Unity, Xenko: These use a custom shader language. This is a clean solution, but monumentally more complex than what I’ve done.
    • MonoGame, Unreal: Automatic shader translation. The approach here is to translate a single shader language into many, as needed. This could be fairly simple, depending on how much obscure syntax you’re willing to accept.
  • Material specification is very verbose. The example from the Tiny Demo above shows how verbose it is to create a simple Material object. It is possible that all of the necessary information could be retrieved via shader reflection (with both OpenGL and D3D), but I’ve not done that.
  • There is no multi-threading support. OpenGL is notoriously hard (impossible?) to multi-thread, but the D3D11 backend could have been easily threaded with a redesigned API.
  • Resource creation is a bit unusual because constructors aren’t used. This would be hard to work around without a level of indirection in each object, or with a redesigned assembly architecture (see the final bullet point in “Ideas for Veldrid v2”).
  • There are some things that leaked into the API that should probably be put into another helper library. A cleaner design would only include very low-level concepts in the core library, with others built on top.

Ideas for “Veldrid v2”

The initial version of Veldrid has served me well, and I’ve learned a ton while making it. I’ve built up a pretty long list of improvements for a potential “v2” of the library.

The most obvious addition for the library is additional backend implementations. Ideally, a next-gen version of the library would support, at the very least, OpenGL ES and Vulkan alongside the existing D3D11 and OpenGL 3+ backends. Most importantly, this would give me the option to run on iOS and Android, which is currently not possible with D3D or “full” OpenGL. Realistically, this would be the most expensive feature to implement, but also the most impactful.

As I mentioned above, a glaring problem with the initial library is that it has no support for multi-threaded rendering. APIs like Vulkan have been explicitly designed to be used in multi-threaded applications, and it’s clear that threading is an important problem to tackle for a modern graphics library. To a lesser extent, even Direct3D11, which is already supported in Veldrid, has threading features that are going unused in my library. I have a suspicion that this feature would naturally fall out of a next-gen library designed around supporting Vulkan and other modern graphics APIs.
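
One speculative shape this could take (not something Veldrid does today; the types here are hypothetical): worker threads record lightweight commands into lists in parallel, and a single render thread replays them against the RenderContext that owns the graphics API.

using System.Collections.Generic;

// Purely speculative sketch of a command-list design for multi-threaded recording.
public abstract class RenderCommand
{
    public abstract void Execute(RenderContext rc);
}

public class DrawIndexedCommand : RenderCommand
{
    public VertexBuffer VertexBuffer;
    public IndexBuffer IndexBuffer;
    public Material Material;
    public int IndexCount;

    public override void Execute(RenderContext rc)
    {
        rc.SetVertexBuffer(VertexBuffer);
        rc.SetIndexBuffer(IndexBuffer);
        rc.SetMaterial(Material);
        rc.DrawIndexedPrimitives(IndexCount);
    }
}

public static class CommandListRenderer
{
    // Called on the single thread that owns the graphics device.
    public static void Replay(IReadOnlyList<RenderCommand> commands, RenderContext rc)
    {
        foreach (RenderCommand command in commands)
        {
            command.Execute(rc);
        }
    }
}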

I’ve already mentioned the problems with Materials in the current version of Veldrid, and this is an area that obviously needs to be overhauled in v2. It’s hard to say what the improved version would look like without a design for the rest of the library, but at the very least it needs to be significantly less verbose and error-prone than the current version.

Since the above features will most likely require re-architecting large portions of the library, I think it would be interesting to re-think another core piece, namely the use of interfaces and abstract classes in the public API. Veldrid is a single assembly which contains multiple implementations of a single API-agnostic interface. This means you can decide at runtime, rather than deployment-time, whether you want to use Direct3D or OpenGL, and it also gives you the ability to switch APIs at runtime. On the other hand, the approach comes with a level of runtime overhead because of the interface and virtual call dispatch involved. Most other 3D graphics layers use compile-time specialization rather than runtime/interface specialization. I would like to explore whether an alternative approach could be used, involving the “bait-and-switch” technique used in some PCL projects. A custom AssemblyLoadContext could be used to load a particular version of Veldrid.dll which used a specific graphics API. This would allow you to retain the flexibility of the current approach, without the overhead of interface or virtual dispatch.
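
As a rough sketch of that last idea (assuming each backend ships its own specialized build of Veldrid.dll in a known folder; the paths and the loader class here are hypothetical), a custom AssemblyLoadContext could resolve the Veldrid assembly from the folder matching the backend chosen at startup:

using System.IO;
using System.Reflection;
using System.Runtime.Loader;

// Hypothetical loader: resolves a backend-specific build of Veldrid.dll at runtime.
public class BackendLoadContext : AssemblyLoadContext
{
    private readonly string _backendDirectory;

    public BackendLoadContext(string backendDirectory)
    {
        _backendDirectory = backendDirectory;
    }

    protected override Assembly Load(AssemblyName assemblyName)
    {
        string candidate = Path.Combine(_backendDirectory, assemblyName.Name + ".dll");
        // Load the backend-specific assembly if present; otherwise defer to the default context.
        return File.Exists(candidate) ? LoadFromAssemblyPath(candidate) : null;
    }
}

// Usage: pick the Direct3D-specialized Veldrid.dll without any interface or virtual dispatch.
// var context = new BackendLoadContext(Path.GetFullPath("backends/d3d11"));
// Assembly veldrid = context.LoadFromAssemblyPath(Path.GetFullPath("backends/d3d11/Veldrid.dll"));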


[Image: http://i.imgur.com/70y6sJq.gif]

Veldrid is an open-source project available on my GitHub page. It uses the new MSBuild-based .NET Core tooling and can be used from any project targeting .NET Standard 1.5 or above.

Thanks for reading! In subsequent posts, I’ll look at more practical applications of Veldrid in my game engine. In the meantime, if anyone is developing a similar library, or would like to share some tips about the design of an abstract renderer, please leave a comment below, message me on Twitter, or file an issue on Veldrid’s GitHub page.

8 thoughts on “Designing a 3D Rendering Library for .NET Core”

  1. Very cool stuff! I was wondering what your solution was for shaders in CrazyCore. From an engine usability standpoint it would be great to write it once and have it work everywhere. Unfortunately, the more supported targets (OpenGL ES, Vulkan, DX12?) the more work needed on the engine side to translate.

    After all that, you still need to solve cross-platform input (touch, mouse, keys, controllers, gestures) and optimized assets (shaders, texture formats, audio formats, etc). All very solvable problems but I found it’s a lot of work!

    My previous engine iteration was going for a web target via WebGL through transpiling C# to JavaScript. The graphics abstraction from OpenGL Core 3.1 to ES was pretty smooth. However, I found myself abstracting quite a lot of things on top of it. I found the expectations of a web game were that you should be able to jump right in and display your scene as fast as possible without waiting for every asset to load (aka streaming assets). That caused a series of refactors and the “what else should be abstracted” questions which sent me down the path of realizing the code was getting too complex. Testing all the edge cases and writing all the implementations was taking me farther away from actually using the engine to make a game, so I decided to cut the platform and focus on saving the engine users time (i.e. features) rather than getting them more places/targets.

    What you’re doing is pretty incredible for one person. Having the ability to switch between DirectX and OpenGL at runtime is pretty neat. Having a single API to write a game against and be able to swap out the implementation as technology changes is really ‘the dream’. The last game engine you’ll need. It’s a huge undertaking and I hope to see what you learn from v2.


    1. Hey PurelySimple, thanks for reading and thanks for the comment!

      > From an engine usability standpoint it would be great to write it once and have it work everywhere. Unfortunately, the more supported targets (OpenGL ES, Vulkan, DX12?) the more work needed on the engine side to translate.

      Yes, you’re definitely right. I think I could have easily done auto-translation since I only have two targets right now, but it will become harder and harder as I add more backends. On the bright side, I found that forcing myself to write both versions of the shaders was a good way to learn. Not great for productivity of course, but good for learning.

      I’ve also struggled with limiting the amount of time I spent on “engine features” vs. “game features”. At some point I decided to limit myself to bug fixes in the engine and dedicate the rest of the time to real functionality in the game.


  2. Thanks for sharing your engine design! One thing I thought I’d mention was that while multi-threading the rendering itself is pretty much impossible with OpenGL, it is relatively easy to design the engine such that a second thread can be used to do all the culling and rendering in parallel with the main update thread. As scenes get more complicated, this can result in a significant performance improvement.

    I’m working on a C# .Net Core based engine now using Vulkan. It’s going pretty well so far, but it’s definitely different than older DirectX and OpenGL. I look forward to seeing your future articles!


    1. Thanks for reading! You have a good suggestion for multi-threading that wouldn’t require changing too much of the rendering code to accomplish. Ultimately I think that I will try an approach where simple “command lists” (of a sort) are built in parallel (by many threads) and a single render thread actually converts those into OpenGL calls. That would allow the more expensive parts to be threaded and only the final steps would need to be submitted from a single thread.

      I’m very interested in hearing about your Vulkan engine. Let me know if you have anything to share!


      1. In our previous engine which had room and portal culling and software-based occlusion culling, the total ‘cull’ times could be fairly high, so we’d end up with a nice balance between our update-ai-physics-thread and cull-draw-thread. I think BitSquid has a similar setup, except they use a command list (messages basically) to update the draw-thread state vs. having a sync-point at the end of the frame to copy the latest update state changes into the draw state (which can also be multi-threaded in many cases). With Vulkan, I’m hoping to extend this to use more threads/tasks in the draw thread to generate command buffers in parallel as well (for lights/shadows/etc).

        I’m in the process of setting up a blog documenting my “fun” times with Vulkan, so I’ll leave a message once that’s up.

        Now if only I could get the macros/environment variables like $(ProjectDir) working in .Net Core Pre-Build events in VS2017… 🙂


  3. What would be the feasibility of using your library to create 2D cross-platform UI controls?

    I have a lot of existing code for a set of controls, just need a way to re-work the rendering for core, and some form events for mouse/pointer interaction.

    Any thoughts or ideas would be appreciated…

    Thanks,
    Eric


    1. Hey Eric,

      It’s certainly feasible to build 2D UI controls on top of this. However, the difficulty / complexity would be roughly on-par with doing it directly on top of OpenGL or Direct3D. Realistically, you would probably want to build a simple layer on top of this that let you emit primitive lines, shapes, fonts, etc. That would be an appropriate layer to build UI controls on top of.

      As for windowing and input management: I would probably just recommend you use SDL2. There aren’t great windowing/input libraries for .NET that I can recommend, so I’ve started using SDL2 directly. I have a wrapper that I use here.

