
Warning! Some information on this page is older than 5 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.

# My First Triangle in DirectX 11

Apr 2010

Now that I have my new graphics card, I've started learning Direct3D 11. I did a lot of coding in DirectX 9 before. I also looked at the DirectX 10 API, but didn't gain much practical experience with it. I'm very excited about how the new API looks and the possibilities it creates. The library interface looks better organized, more object-oriented and clear. It makes extensive use of descriptors - the same concept I liked so much in PhysX.

But at the same time I must admit it's more difficult to get started than it was in DirectX 9. You have to create more objects to set up a basic framework that can render anything. The so-called Fixed Function Pipeline doesn't exist anymore, so you HAVE to write shaders to render anything. Better organization of all the data forces you to pass shader constants in buffers instead of one by one, fill descriptors, create and use state objects (like ID3D11DepthStencilState) instead of changing render states one by one, create views for resources (like ID3D11ShaderResourceView for ID3D11Texture2D) instead of using them directly, compile shaders from HLSL source to bytecode and then create the shader object with a separate call, and so on.

There are also big changes in math support. Microsoft didn't provide a new D3DX math library with DX11. You can still use the old one, but now it's recommended to use the new, portable (to Xbox 360) and highly optimized XNA Math library. It's elegant, but can be difficult for beginners. For example, there is now one universal type - XMVECTOR - that can represent a vector, color, plane, quaternion and more. It must always be aligned to 16 bytes (because it uses SSE). I suppose it's not easy to understand concepts like vector loading, storing or swizzling, which can be new to many DirectX programmers.
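The load/store idiom can be sketched like this (a minimal example, assuming the DX SDK's `<xnamath.h>` header; variable names are mine):

```cpp
#include <xnamath.h>

void Example()
{
    // XMFLOAT3 is a plain storage type with no alignment requirement:
    XMFLOAT3 posInMemory(1.0f, 2.0f, 3.0f);

    // Load into the 16-byte-aligned SSE register type:
    XMVECTOR pos = XMLoadFloat3(&posInMemory);

    // Do some math on it while it stays in a register:
    pos = XMVectorScale(pos, 2.0f);

    // Store the result back to ordinary memory:
    XMStoreFloat3(&posInMemory, pos);
}
```

The point of the explicit load/store pair is that XMVECTOR lives in SSE registers, while the XMFLOAT* types are what you keep in your own structures.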

Where to learn DirectX 11 from? It looks like there are not many valuable sources online yet. One website I found looks interesting, but it's just a blog with a few pieces of code, and the author tries to wrap everything in his own classes from the start, which makes no sense to me. The most valuable source of knowledge is the original documentation installed with the DX SDK. It's far from comprehensive, because the chapter about DirectX 11 describes only the new features rather than everything about using DirectX, like the version 9 documentation did, but for somebody who already knows some graphics programming it should be OK.

What I want to show today is my first "Hello World" triangle made in Direct3D 11 and the code that renders it. You can download the whole source, with a project for Visual C++ 2008. See also the code online: Dx11Test.cpp.

To code in DX11 you just need the DirectX SDK - the same one as for DirectX 9 and 10, because it contains the SDKs for all of them. Programs that use DirectX 11 can run on Windows 7 as well as Vista, but not on XP. Surprisingly, a newest-generation graphics card is not needed - the DX11 API lets you target different "feature levels" from D3D_FEATURE_LEVEL_9_1 to D3D_FEATURE_LEVEL_11_0, so you can write your code so that it will run even on GPUs that support only Shader Model 2!

To start, I've created an empty window in WinAPI. It's similar to the one shown in every book about DirectX, so I won't go into details here. The next thing was to create a Direct3D device. It looks totally different from what we know from DX9. We get three objects at once, all necessary for further code: ID3D11Device, ID3D11DeviceContext and IDXGISwapChain.

The good old Device is still the main object of an initialized Direct3D. It has lots of methods, e.g. for creating different types of resources like buffers or textures. But now much of its functionality has moved to another object, completely new in DX11 - the Device Context. The context exposes methods to set render states and draw the actual geometry. It has been designed this way so you can create additional "deferred" contexts to "record" Command Lists with state changes and draw calls on background threads and then "play" them on the main thread - a great thing that was possible on consoles for some time, but not on PC until now! Here, however, we will only use the main, "immediate" context, created along with the device.

The Swap Chain isn't anything new, but now we have to use it explicitly. The strange class name comes from DXGI (Microsoft DirectX Graphics Infrastructure) - a library introduced with DX10. It is an API that is meant to be independent of any particular DirectX version. Enumeration of adapters and display modes, as well as sharing of 2D bitmaps ("surfaces"), is performed at this level.

To create these objects, I first initialize some descriptor structures. An array of D3D_FEATURE_LEVEL enumerations is required, as well as a DXGI_SWAP_CHAIN_DESC structure describing the desired parameters of the swap chain. Here we provide the screen resolution and refresh rate (in an embedded DXGI_MODE_DESC structure), multisampling parameters (in a DXGI_SAMPLE_DESC structure) and some other values. Then, after calling the D3D11CreateDeviceAndSwapChain function, we get all three objects at once - the device, its immediate context and the swap chain. They need to be released at the end of the program with the Release method, as do all COM objects used through the DirectX API.
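A minimal sketch of that call could look like this (the window handle `g_Wnd` and the 640x480 size are my assumptions; error handling is omitted):

```cpp
#include <d3d11.h>

DXGI_SWAP_CHAIN_DESC swapChainDesc;
ZeroMemory(&swapChainDesc, sizeof(swapChainDesc));
swapChainDesc.BufferCount = 1;
swapChainDesc.BufferDesc.Width  = 640;                      // embedded DXGI_MODE_DESC
swapChainDesc.BufferDesc.Height = 480;
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferDesc.RefreshRate.Numerator   = 60;
swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
swapChainDesc.BufferUsage  = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.OutputWindow = g_Wnd;                         // assumed HWND
swapChainDesc.SampleDesc.Count = 1;                         // no multisampling
swapChainDesc.Windowed = TRUE;

D3D_FEATURE_LEVEL featureLevels[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_0 };
D3D_FEATURE_LEVEL obtainedFeatureLevel;

ID3D11Device *dev = NULL;
ID3D11DeviceContext *devCtx = NULL;
IDXGISwapChain *swapChain = NULL;

HRESULT hr = D3D11CreateDeviceAndSwapChain(
    NULL,                        // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    NULL, 0,                     // no software rasterizer, no flags
    featureLevels, _countof(featureLevels),
    D3D11_SDK_VERSION,
    &swapChainDesc,
    &swapChain, &dev, &obtainedFeatureLevel, &devCtx);
```

Note how the one function hands back the device, the swap chain and the immediate context together, plus the feature level actually obtained.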

The next step is to retrieve the ID3D11Texture2D representing the back buffer with the IDXGISwapChain::GetBuffer method and create a render target view for it (of the ID3D11RenderTargetView class) with ID3D11Device::CreateRenderTargetView. You may know the term "view" if you've learned some advanced SQL ;) It's simply an object that represents a way of looking at some resource from a particular perspective. So for example, if you create an ID3D11Texture2D, you may create an ID3D11ShaderResourceView for it to use it as a texture that can be sampled by a shader, as well as an ID3D11RenderTargetView to be able to set it as a render target and thus render onto it.
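Sketched in code (assuming the `swapChain` and `dev` objects from the initialization step; error handling omitted):

```cpp
// Fetch the back buffer texture from the swap chain:
ID3D11Texture2D *backBuffer = NULL;
swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);

// Create a render target view for it (NULL desc = view of the whole resource):
ID3D11RenderTargetView *renderTargetView = NULL;
dev->CreateRenderTargetView(backBuffer, NULL, &renderTargetView);

// The view keeps its own reference, so we can release ours right away:
backBuffer->Release();
```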

Here we do the latter, so after we have the Render Target View for the 2D Texture extracted from the Swap Chain, we set it as our main and only render target with ID3D11DeviceContext::OMSetRenderTargets. The "OM" prefix in this method's name comes from the new, simplified graphics pipeline model, where the subsequent stages are called: Input Assembler (IA), Vertex Shader (VS), Hull Shader (HS), Tessellator Stage, Domain Shader (DS), Geometry Shader (GS), Stream Output (SO), Rasterizer Stage (RS), Pixel Shader (PS) and Output Merger (OM), plus the separate Compute Shader (CS).

It's now also necessary to set a viewport, which will - in our case - span the entire back buffer. It's described by the D3D11_VIEWPORT structure and set with the ID3D11DeviceContext::RSSetViewports method.
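Both steps together can be sketched as follows (assuming `devCtx` and `renderTargetView` from the previous steps; the 640x480 size is my assumption):

```cpp
// Bind the back buffer view as the one and only render target (no depth-stencil):
devCtx->OMSetRenderTargets(1, &renderTargetView, NULL);

// A viewport spanning the whole back buffer:
D3D11_VIEWPORT viewport;
viewport.TopLeftX = 0.0f;
viewport.TopLeftY = 0.0f;
viewport.Width    = 640.0f;
viewport.Height   = 480.0f;
viewport.MinDepth = 0.0f;
viewport.MaxDepth = 1.0f;
devCtx->RSSetViewports(1, &viewport);
```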

We need to create and initialize a vertex buffer in order to render any triangles, because there is no convenient function like DrawPrimitiveUP in DX11 that would just take a pointer to some vertex data. Well, all in all it was a bad idea to use it anyway because of its poor performance, just like glVertex in OpenGL. So now every beginner will learn good coding habits from the very beginning ;)

We can design our own vertex structure just like in previous DirectX versions. I called mine MyVertex. To create a vertex buffer, we fill in the D3D11_BUFFER_DESC structure, providing several parameters like the buffer size in bytes and some flags. The flags have been simplified, so the old D3DPOOL and D3DUSAGE no longer exist. There is no longer anything like a "memory pool". Instead, we provide a single D3D11_USAGE enumeration that describes the most common usage patterns for all D3D resources. We also give D3D11_BIND_FLAG flags to tell whether we are going to bind our resource as a vertex buffer, render target, shader resource, stream output etc. D3D11_CPU_ACCESS_FLAG tells whether we want read and/or write access to the resource from the CPU.

Before creating the vertex buffer, I also fill in the D3D11_SUBRESOURCE_DATA structure. It's a new and very useful mechanism, introduced in DX10, that allows you to pass initial data to a resource as it is created. It's much more convenient than creating, locking, filling and unlocking a resource, like we had to do before (in DX9). By the way, locking/unlocking is now called mapping/unmapping. A resource with D3D11_USAGE_IMMUTABLE must be initialized this way, because its data cannot be changed after creation.

After we fill the D3D11_BUFFER_DESC and the optional D3D11_SUBRESOURCE_DATA structures, we call ID3D11Device::CreateBuffer to create the vertex buffer and, at the same time, fill it with the data passed through a pointer.
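The whole sequence can be sketched like this (the exact MyVertex layout and vertex values are my assumptions, chosen to match the float[2] position plus unsigned color described later; `dev` is the device from the initialization step):

```cpp
struct MyVertex { float Pos[2]; unsigned Color; };

const MyVertex vertices[] = {
    { {  0.0f,  0.5f }, 0xFF0000FF },
    { {  0.5f, -0.5f }, 0xFF00FF00 },
    { { -0.5f, -0.5f }, 0xFFFF0000 },
};

D3D11_BUFFER_DESC bufferDesc;
ZeroMemory(&bufferDesc, sizeof(bufferDesc));
bufferDesc.ByteWidth = sizeof(vertices);            // buffer size in bytes
bufferDesc.Usage     = D3D11_USAGE_IMMUTABLE;       // filled once, never changed
bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;    // how we will bind it

D3D11_SUBRESOURCE_DATA initData;
ZeroMemory(&initData, sizeof(initData));
initData.pSysMem = vertices;                        // pointer to the initial data

ID3D11Buffer *vertexBuffer = NULL;
dev->CreateBuffer(&bufferDesc, &initData, &vertexBuffer);
```

With D3D11_USAGE_IMMUTABLE the second parameter of CreateBuffer must not be NULL, because this is the only chance to put data into the buffer.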

The next thing is to compile and create the shaders. DX11 forces us to write shaders even if we want to draw one simple, untextured triangle. Again, it makes the start very hard for beginners, but at the same time it enforces good habits and good performance. Luckily, we don't need to provide all the types of shaders supported by DX11, and there are many of them: Vertex Shader (the old one), Hull Shader and Domain Shader (new in DX11, related to tessellation), Geometry Shader (existing since DX10, can generate new geometry), Pixel Shader (the old one) and Compute Shader (for general-purpose computation). For us it's enough to write a VS and a PS. Their code is similar to how it would look in previous DirectX versions.

The compilation of shaders looks different, though. We first call the D3DX11CompileFromFile function to load the HLSL source code from a file and compile it. If the compilation succeeds, we get an ID3D10Blob with the compiled shader code. It's just a buffer with some binary data in RAM. Next, we make a separate call to ID3D11Device::CreateVertexShader to create a real ID3D11VertexShader from this bytecode.
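A sketch of the two-step process for the vertex shader (the file name "Shader.hlsl" and the entry point "MainVS" are my assumptions; a Unicode build would need an L"" string):

```cpp
ID3D10Blob *vsCode = NULL, *errors = NULL;
HRESULT hr = D3DX11CompileFromFile(
    "Shader.hlsl",               // HLSL source file (assumed name)
    NULL, NULL,                  // no macros, no include handler
    "MainVS", "vs_4_0",          // entry point and target profile
    0, 0,                        // compile flags
    NULL,                        // no thread pump - compile synchronously
    &vsCode, &errors, NULL);

ID3D11VertexShader *vertexShader = NULL;
if (SUCCEEDED(hr))
    dev->CreateVertexShader(
        vsCode->GetBufferPointer(), vsCode->GetBufferSize(),
        NULL,                    // no class linkage
        &vertexShader);
```

On failure, the `errors` blob (if not NULL) contains the compiler's error messages as plain text. The pixel shader goes through the same two steps with a "ps_4_0" profile and ID3D11Device::CreatePixelShader.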

Now only one object remains before we can render the long-awaited triangle. It's called an Input Layout and it represents the description of our vertex structure. There is no FVF (Flexible Vertex Format) in DX11, so we can't just combine some bit flags to describe our vertices. We have to define an array of D3D11_INPUT_ELEMENT_DESC structures, telling that our vertices have a Position of type float[2] and a Color of type unsigned. Then we call ID3D11Device::CreateInputLayout to create an ID3D11InputLayout object from it.
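For a vertex with a float[2] position followed by an unsigned color, the element array could look like this (semantic names are assumed to match the shader's input signature; `vsCode` is the compiled vertex shader blob from the previous step):

```cpp
const D3D11_INPUT_ELEMENT_DESC inputElements[] = {
    // semantic, index, format,                  slot, byte offset, class, step rate
    { "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT,   0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM, 0, 8, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

ID3D11InputLayout *inputLayout = NULL;
dev->CreateInputLayout(
    inputElements, _countof(inputElements),
    vsCode->GetBufferPointer(), vsCode->GetBufferSize(),  // VS bytecode, for validation
    &inputLayout);
```

Interestingly, CreateInputLayout wants the vertex shader bytecode so it can validate the layout against the shader's input signature.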

Finally we are ready to set up render states in the device context. There are not many of them - most of those thousands of enums and flags were removed along with the fixed pipeline. The remaining ones are grouped into state objects like ID3D11DepthStencilState, but we don't use them here. They stay NULL, so the context uses default settings. The only things we have to set in the ID3D11DeviceContext are the input layout, the vertex buffer, the primitive topology and our two shaders.
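Sketched in code (assuming the objects created in the previous steps, including a `pixelShader` created the same way as the vertex shader):

```cpp
devCtx->IASetInputLayout(inputLayout);

UINT stride = sizeof(MyVertex), offset = 0;
devCtx->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
devCtx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

devCtx->VSSetShader(vertexShader, NULL, 0);   // no class instances
devCtx->PSSetShader(pixelShader,  NULL, 0);
```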

And finally we do the actual rendering: every frame we clear the render target, draw the triangle and present the back buffer.
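The per-frame code boils down to just three calls (a sketch assuming the setup above):

```cpp
float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };     // opaque black
devCtx->ClearRenderTargetView(renderTargetView, clearColor);
devCtx->Draw(3, 0);                                   // 3 vertices, starting at index 0
swapChain->Present(0, 0);                             // show the back buffer, no vsync
```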

To be continued... ;)

#directx #rendering
