# How to design API of a library for Vulkan?

Fri 08 Feb 2019

In my previous blog post yesterday, I shared my thoughts on graphics APIs and libraries. Another problem that brought me to these thoughts is a question: how do you design an API for a library that implements a single algorithm, pass, or graphics effect using Vulkan or DX12? It may seem trivial at first, like a task that just needs to be designed and implemented, but if you think about it more, it turns out to be a difficult issue. There are few software libraries like this in existence. I don’t mean here a complex library/framework/engine that “horizontally” wraps the entire graphics API and takes it to a higher level, like V-EZ, Nvidia Falcor, or Google Filament. I mean just a small, “vertical”, plug-in library doing one thing, e.g. implementing an ambient occlusion effect, efficient texture mipmap down-sampling, rendering UI, or simulating particle physics on the GPU. Such a library needs to interact efficiently with the rest of the user’s code to be part of a large program or game. Vulkan Memory Allocator is also not a good example of this, because it only manages memory, implements no render passes, involves no shaders, and interacts with a command buffer only in the part related to memory defragmentation.

I met this problem at my work. Later I also discussed it in detail with my colleague. There are multiple questions to consider (a sketch of a hypothetical initialization structure follows the list):

  • Who allocates the memory for buffers and textures/images? There are two kinds of those – input/output ones that interact with the user’s code and internal ones, used for intermediate results. Should the user create them? If so, we must inform him about the requirements of those resources, like size and minimum required set of VK_IMAGE_USAGE_ flags. If the library should allocate them internally, how should it do it? By just grabbing new pieces of VkDeviceMemory blocks? What if the user prefers all GPU memory to be allocated using a complex allocator of his choice, like Vulkan Memory Allocator?
  • Who allocates necessary descriptors? Like with memory resources, the library could just create its own internal VkDescriptorPool, but what if the user prefers all the descriptors to be allocated from his own descriptor pool?
  • Who issues the barriers before and after the render pass provided by the library, for all the resources involved? Should it be the user or the library? Barriers are an obvious contact point between the user’s code and the library code, so there must be a way to issue them considering both source and destination image layout, pipeline stage, and access flags.
  • How to provide SPIR-V code of the shaders? Should the library load it from files internally, or is it the user’s responsibility to load them from whatever source he wants and pass pointers to their memory on library initialization? Maybe we could somehow bundle them into the library source, e.g. as large constant byte arrays defined in an auto-generated C++ code?
  • What if the user doesn’t want to link to Vulkan functions statically, but rather fetches pointers to them using vkGetDeviceProcAddr, possibly with the help of something like volk? The library should be able to use those.
  • What if the user wants to pass custom CPU allocation callbacks (VkAllocationCallbacks) to all Vulkan functions? The library should be able to do it.
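
To make these questions concrete, here is a rough sketch of what the initialization structure of such a hypothetical library could look like (assuming <vulkan/vulkan.h> is included). All names here are invented for illustration only; the point is just to show how many integration points need to be exposed:

// Hypothetical example - FooEffectCreateInfo and its callbacks are invented names,
// not any real library's API.
struct FooEffectCreateInfo
{
    VkDevice device;
    VkPhysicalDevice physicalDevice;

    // Function pointer fetched by the user (e.g. via volk or the loader).
    // If left null, the library could fall back to statically linked Vulkan functions.
    PFN_vkGetDeviceProcAddr pfnGetDeviceProcAddr;

    // CPU allocation callbacks to pass to every vkCreate*/vkDestroy* call. Optional.
    const VkAllocationCallbacks* pAllocationCallbacks;

    // Optional callbacks so the user can route GPU memory allocations through
    // an allocator of his choice, like Vulkan Memory Allocator.
    VkResult (*pfnAllocateMemory)(const VkMemoryRequirements* pMemReq,
        VkDeviceMemory* pOutMemory, VkDeviceSize* pOutOffset, void* pUserData);
    void (*pfnFreeMemory)(VkDeviceMemory memory, VkDeviceSize offset, void* pUserData);
    void* pMemoryUserData;

    // Descriptor pool to allocate from; VK_NULL_HANDLE = the library creates its own.
    VkDescriptorPool descriptorPool;

    // SPIR-V code of the library's shaders, loaded by the user from any source.
    const uint32_t* pShaderCode;
    size_t shaderCodeSize;
};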

This is a problem similar to what we have with any C++ library. There is no consensus about the implementation of various basic facilities, like strings, containers, asserts, mutexes etc., so every major framework or game engine implements its own. Even something as simple as a min/max function is defined in multiple places. It is defined once in the <algorithm> header, but some developers don’t use STL. <Windows.h> provides its own, but those are defined as macros, so they break every other definition unless you #define NOMINMAX before the include… A typical C++ nightmare. Smaller libraries are better off being configurable or defining everything on their own, like the Vulkan Memory Allocator having its own assert, its own vector (which can be switched to the standard STL one), and 3 versions of a read-write mutex.

All these issues make it easier for developers to just write a paper, describe their algorithm, and possibly share a piece of code, pseudo-code, or a shader, rather than provide a ready-to-use library. This is a very bad situation. I hope that over time patterns will emerge for how the API of a library implementing a single pass or effect using Vulkan/DX12 should look. Recently my colleague shared an idea with me: if there was some higher-level API that implemented all these interactions between the various parts (like resource allocation and image barriers) and we all commonly agreed on using it, then authoring libraries and stitching them together on top of it would be way easier. That’s another argument for the need for such a new, higher-level graphics API.

Comments | #c++ #graphics #libraries #directx #vulkan #gpu Share

# Thoughts on graphics APIs and libraries

Thu 07 Feb 2019

Warning: This is a long rant. I’d like to share my personal thoughts and opinions on graphics APIs like Vulkan and Direct3D 12.

Some time ago I came up with a diagram showing how graphics software technologies evolved over the last decades – see my blog post “Lower-Level Graphics API - What Does It Mean?”. The new graphics APIs (Direct3D 12, Vulkan, Metal) are not only a clean start that abandons all the legacy garbage going back to the ‘90s (like glVertex); they also take graphics programming to a new level. It is a lower level – they are more explicit, closer to the hardware, and better match how modern GPUs work. At least that’s the idea. It means simpler, more efficient, and less error-prone drivers. But they don’t make game or engine programming simpler. Quite the opposite – more responsibilities are now moved to engine developers (e.g. memory management/allocation). Overall, it is commonly considered a good thing though, because the engine has higher-level knowledge of its use cases (e.g. which textures are critically important and which can be unloaded when GPU memory is full), so it can get better performance by handling them properly. All this is hidden in the engines anyway, so developers making their games don’t notice the difference.

Those of you who – just like me – deal with these low-level graphics APIs in your everyday work may wonder whether they provide the right level of abstraction. I know it will sound controversial, but sometimes I get a feeling they are at exactly the worst possible level – so low that they are difficult to learn and use properly, yet so high that they still hide implementation details important for getting good performance. Let’s take image/texture barriers as an example. They were non-existent in previous APIs. Now we have to issue them, which is a major pain point when porting old code to a new API. Do too few of them and you get graphical corruption on some GPUs and not on others. Do too many and your performance can be worse than it was on DX11 or OGL. At the same time, they are an abstract concept that still hides multiple things happening under the hood. You can never be sure which barrier will flush some caches, stall the whole graphics pipeline, or convert your texture between internal compression formats on a specific GPU, unless you use some specialized, vendor-specific profiling tool, like Radeon GPU Profiler (RGP).
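
To illustrate how much must be spelled out explicitly, here is a minimal sketch of a single transition, assuming an image that was just rendered to as a color attachment and will now be sampled in a fragment shader (image and cmdBuf are assumed to exist already):

// One image barrier: color attachment write -> fragment shader read.
VkImageMemoryBarrier barrier = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER };
barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image = image;
barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

vkCmdPipelineBarrier(cmdBuf,
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, // srcStageMask
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,         // dstStageMask
    0,                                             // dependencyFlags
    0, nullptr, 0, nullptr, 1, &barrier);

What any particular GPU does in response to this – flush caches, decompress the render target, stall – is invisible at this level.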

It’s the same with memory. In DX11 you could just specify the intended resource usage (D3D11_USAGE_IMMUTABLE, D3D11_USAGE_DYNAMIC) and the driver chose the preferred place for it. In Vulkan you have to query for the memory heaps available on the current GPU and explicitly choose the one you consider best for your resource, based on low-level flags like VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT etc. AMD exposes 4 memory types and 3 memory heaps. Nvidia has 11 types and 2 heaps. Intel integrated graphics exposes just 1 heap and 2 types, showing that the memory is really unified, while an AMD APU, also integrated, has the same memory model as the discrete cards. If you try to match these to what you know about the physically existing video RAM and system RAM, it doesn’t make much sense. You could just pick the first DEVICE_LOCAL memory for the fastest GPU access, but even then, you cannot be sure your resource will stay in video RAM. It may be silently migrated to system RAM without your knowledge and consent (e.g. if you run out of memory), which will degrade performance. What is more, there is no way to query for the amount of free GPU memory in Vulkan, unless you do hacks like using DXGI.
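
As a sketch of what this looks like in code (assuming memReq was filled by vkGetBufferMemoryRequirements or vkGetImageMemoryRequirements for some resource), picking “the first DEVICE_LOCAL type” boils down to something like:

// Find a memory type that is DEVICE_LOCAL and allowed by the resource's memoryTypeBits.
VkPhysicalDeviceMemoryProperties memProps;
vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

uint32_t typeIndex = UINT32_MAX;
for(uint32_t i = 0; i < memProps.memoryTypeCount; ++i)
{
    if((memReq.memoryTypeBits & (1u << i)) != 0 &&
        (memProps.memoryTypes[i].propertyFlags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
    {
        typeIndex = i;
        break;
    }
}
// Even when this succeeds, the driver/OS may still silently migrate the
// allocation to system RAM when video memory is oversubscribed.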

Hardware queues are no better. Vulkan claims to give explicit access to the pieces of GPU hardware, so you need to query for the queues that are available. For example, Intel exposes only a single graphics queue. AMD lets you create up to 3 additional compute-only queues and 2 transfer queues. Nvidia has 8 compute queues and 1 transfer queue. Do they all really map to silicon that can work in parallel? I doubt it. So how many of them should you use to get the best performance? There is no way to tell by just using the Vulkan API. AMD promotes doing compute work in parallel with 3D rendering, while Nvidia diplomatically advises to be “conscious” with it.
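
A sketch of the enumeration (assuming <vector> and <cstdio> are included) – the API reports family flags and queue counts, but says nothing about whether they map to separate hardware:

// List queue families and what each one supports.
uint32_t familyCount = 0;
vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, nullptr);
std::vector<VkQueueFamilyProperties> families(familyCount);
vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, families.data());

for(uint32_t i = 0; i < familyCount; ++i)
{
    const VkQueueFlags f = families[i].queueFlags;
    printf("Family %u: count=%u graphics=%d compute=%d transfer=%d\n",
        i, families[i].queueCount,
        (f & VK_QUEUE_GRAPHICS_BIT) != 0,
        (f & VK_QUEUE_COMPUTE_BIT) != 0,
        (f & VK_QUEUE_TRANSFER_BIT) != 0);
}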

It's the same with presentation modes. You have to enumerate the VkPresentModeKHR-s available on the machine and choose the right one, along with the number of images in the swapchain. These don't map intuitively to a typical user-facing setting of V-sync = on/off, as they are intended to be low level. Still, you have no control over, and no way to check, whether the driver does a "blit" or a "flip".
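
A sketch of how an engine might map this to a V-sync toggle (assuming surface is a VkSurfaceKHR, vsync is the user setting, and <vector> is included) – FIFO is the only mode the spec guarantees, the rest may be absent:

// Enumerate present modes and pick one matching a simple on/off V-sync setting.
uint32_t modeCount = 0;
vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &modeCount, nullptr);
std::vector<VkPresentModeKHR> modes(modeCount);
vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &modeCount, modes.data());

VkPresentModeKHR chosen = VK_PRESENT_MODE_FIFO_KHR; // "V-sync on", always available
if(!vsync)
{
    for(VkPresentModeKHR m : modes)
    {
        if(m == VK_PRESENT_MODE_MAILBOX_KHR || m == VK_PRESENT_MODE_IMMEDIATE_KHR)
        {
            chosen = m;
            break;
        }
    }
}
// Whether the chosen mode ends up as a "flip" or a "blit" is still up to the driver.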

One could say the new APIs don’t deliver on their promise of being low level, explicit, and having predictable performance. It is impossible to deliver, unless the API is specific to one GPU, like on consoles. A common API over different GPUs is always high level, things happen under the hood, and there are still fast and slow paths. Isn’t all this complexity just for nothing? It may be true that, compared to previous-generation APIs, drivers for the new ones need not launch additional threads in the background or perform shader compilation on the first draw call, which greatly reduces the chances of major hitching. (We will see how long this state persists as the APIs and drivers evolve.) Still, there is no way to predict or ensure a minimum FPS/maximum frame time. We are talking about systems where multiple processes compete for resources. On modern PCs there is even no way to know how many cycles a single instruction will take! Cache memory, branch prediction, out-of-order execution – all of these mechanisms are there in the CPU to speed up average cases, but there can always be cases when it works slowly (e.g. a cache miss). It’s the same with graphics. I think we should abandon the false hope of predictable performance; it is a thing of the past, just like pixel-perfect rendering. We can optimize for the average, but we cannot ensure the minimum. After all, games are “soft real-time systems”.

Based on that, I am wondering whether there is room for a new graphics API on top of DX12 or Vulkan. I don’t mean a whole game engine with physics simulation, sound, input controllers and all, like Unity or UE4. I mean an API just like DX11 or OGL, on a similar or higher abstraction level (if higher level, maybe the concept of a persistent “frame graph” with explicit pass and resource dependencies is the way to go?). I also don’t think it’s enough to just reimplement any of those old APIs. The new one should take advantage of the features of the explicit APIs (like parallel command buffer recording), while hiding the difficult parts (e.g. queues, memory types, descriptors, barriers), so it’s easier to use and harder to misuse. (An existing library similar to this concept is V-EZ from AMD.) I think it could still have good performance. The key thing needed for the creation of such a library is abandoning the assumption that the developer must define everything up-front, with nothing allocated, created, or transferred on first use.

See also next post: "How to design API of a library for Vulkan?"

Update 2019-02-12: I want to thank all of you for the amazing feedback I received after publishing this post, especially on Twitter. Many projects have been mentioned that try to provide an API better than Vulkan or DX12 - e.g. Apple Metal, WebGPU, The Forge by Confetti.

Comments | #gpu #optimization #graphics #libraries #directx #vulkan Share

# Global Game Jam 2019 - my impressions

Thu 31 Jan 2019

Last weekend the 2019 edition of Global Game Jam took place - a worldwide event where teams of developers gather at different sites all around the world to make games during two days and two nights. There was a large site in my city (Warsaw) - PolyJam, but I decided to go to Gdańsk to participate in their local site called Hackerspace Game Jam, together with my friends.

The theme this year was "what home means to you". As always, participants interpreted it very differently. Those who have families associated home with all kinds of troubles caused by the other residents. Pooplers - the game I liked the most - is about babies crawling around the house and pooping competitively to cover as much surface as possible with their specific color, while avoiding the mother :) Home Alone: Cat edition is about a cat that can destroy and drop stuff from the shelves, all in first-person perspective. Kapeć Defender is about a man who throws a slipper (pol. "kapeć") at his wife and other people to be able to just sit and watch TV. There were more sci-fi settings as well. I liked the game Gwiezdni Somsiedzi a lot. It is the only one with multiplayer over a network. Players control satellites flying in space, catch asteroids, and throw them at the other players. Another space game was Glop, where players have to cooperatively control various devices on the surface of a planet to make it fly, as well as shoot at incoming obstacles.

When it comes to technology, most teams used the Unity engine. Some used JavaScript with a game framework. There was just 1 VR game. Many games included multiplayer on a single computer using gamepads; one included networked multiplayer.

Our team was a group of friends from the demoscene - 2 ex-Intel C++ developers and 2 DevOps engineers currently working at a bank. Unfortunately we had no graphics artists. Although I would prefer to use Unity or Unreal Engine these days, we eventually decided to go the hard way and code in C++ using dxfw - the old framework developed by Krzysiek K., based on Direct3D 9. I had to remind myself of this old technology before the jam, including all those D3DRS_ fixed-function pipeline states and the D3DX math library. By the way: if the last version of the DirectX SDK for DX9 was released in June 2010, can we already consider it a retro platform, along with Atari and Amiga? ;)

We used the FMOD library for playing sound and music and Gainput for handling input from gamepads. We started from just a ray-traced sphere, so we had to code all the game logic and rendering from scratch, including displaying characters, UI, collisions, etc. We developed some of the logic in C++ and some in Squirrel, because we had this scripting language already integrated with the framework. I had no previous experience with Squirrel, so I had to learn it very quickly. After going through the documentation, I concluded that I love it! It looks like a great scripting language for simple applications. It's not perfect, e.g. it lacks the vector and matrix types so necessary in game development (just like pretty much every other programming language except HLSL/GLSL), but I like its simplicity and syntax. It is very similar to Lua in its overall philosophy - dynamically typed, object-oriented, and based on key-value arrays. The syntax is not that weird though. It seems to follow the "principle of least astonishment" - it's very similar to C++, arrays are indexed from 0, plus ending statements with a semicolon is optional - end of line also works.

Participating in an event such as GGJ is always an adventure and an opportunity for many new experiences - much better than just sitting on the Internet at home. During this jam I not only learned Squirrel as a new programming language, but I also heard what it is like to work as a programmer at a bank, I registered on Asana (a web service for organizing TODO lists, just like Trello, which I used before), and of course I had an opportunity to practice quick and dirty programming, as opposed to code carefully thought out and tested, like it has to be done in a regular job.

Finally, the game we've made is here: LazerBugz. It is a twin-stick shooter happening on a spherical surface of a planet. The "home" is the cosmic base that you have to defend while shooting at alien bugs and going out to gather randomly placed gems. It supports local co-op for any number of players using Xbox gamepads or keyboard and mouse. Some screenshots and a photo of people playing our game:

There was a competition at our site. We didn't take any of the first 3 places; we just got a mention among the games that received a good number of votes. The game that won was Clash of T-Rexes - a kind of Pong with two dinosaurs standing on two planets.

Official photo gallery from the event: Hackerspace Game Jam 2019

Comments | #productions #competitions #ggj #events Share

# Why I think it is worth studying

Fri 21 Dec 2018

In 2018, it's been 10 years since I graduated from the Czestochowa University of Technology with an M.S. in Computer Science. I don't regret it. There is an endless debate among programmers about whether it is worth studying at a university or whether it is better to start a professional career as early as possible and become a fully self-taught specialist. Below is my personal opinion on this. (A little disclaimer: my views may be biased, because my father is a professor of electrical engineering, so I have had lots of contact with the scientific world since my childhood.)

One concern that arises often is the significance of having a university diploma for potential employers. From my experience, I would say that while formal education is not necessary for getting a job as a programmer, it may matter to some companies. It's especially important at the beginning of a career, when there is not much in your CV, because after many years of professional work education takes second place to work experience. Smaller companies tend to favor skills and experience in specific technologies and often don't require a degree, because they want to find an employee who can just start doing his tasks right away. Big corporations may prefer candidates with a degree, because they look for talented people who can learn the required skills after they are hired, plus they often have extensive HR departments with nontechnical people who filter incoming CVs based on keywords and simple criteria, like being a university graduate.

Another frequent question is whether the knowledge taught at the university is practical. Surely there are many courses teaching specific topics that you may never find useful in your career. Surely there is also much you will need to learn in each workplace that was not covered during the studies. It's true that software technologies evolve fast, so what you learn now may become irrelevant in a few years (although not always - e.g. C is more than 40 years old and it's still the 2nd most important programming language). But I think that studying gives you more than just practical knowledge useful for a future job, or a diploma.

1. It gives you a theoretical basis. While you can learn a new programming language or a framework from a book or some materials you find on the Internet, it's not that easy with more theoretical subjects like algebra, calculus, physics, algorithms and data structures, electronics, digital signal processing etc. Would you spend a long time learning these on your own, rather than focusing on something more practical? I bet most people wouldn't, while having some theoretical foundation is essential when you need to understand practical subjects, e.g. that velocity is the derivative and acceleration the second derivative of displacement with respect to time, when programming a car simulation game.

2. It teaches you a bit of everything. For example, if your professional interests involve systems programming and C, or data science and Python, you probably prefer to deepen your knowledge in that area rather than learn about something that is less interesting to you. At the university you have to get at least the basics of various subjects, whether it's web development, databases, computer networks and security, computer hardware, operating systems and shell scripts etc. Each of these subjects may turn out to be unexpectedly useful someday, and the basic knowledge will be a good starting point for learning more about it when necessary.

3. It teaches you, and proves to others, that you are able to carry out and finish a large project. That's an important skill on its own. There are many visionary-type people with their heads full of new, high-level ideas, who start something new every day but never finish anything, because they find it boring or too difficult to do the actual work. There are also the over-thinkers who theorize and plan everything in the smallest detail, but never even start doing. Neither are effective employees. At university, every course is like a project. Passing the exams of every semester is like a project with a strict deadline. Finally, writing a thesis and graduating is also one large project. It may be your first such big and successful accomplishment and a good start for the challenges that await in your professional work.

Comments | Share

# Vulkan sparse binding - a quick overview

Thu 13 Dec 2018

On 13 December 2018 AMD released a new version of its graphics driver: 18.12.2. One could say there is nothing special about it – new drivers are released as often as every 2 weeks. But this version brings a change very important for Vulkan developers. AMD now catches up with the competition – Nvidia and Intel – in supporting Vulkan sparse binding and sparse residency, which makes these features usable in Windows PC games. I’d like to take this opportunity to briefly describe what they are and how you can start using them, assuming you are a programmer who already knows a bit about C or C++ and Vulkan.

Read full entry > | Comments | #graphics #vulkan Share

# "Co działa szybko, a co wolno w grafice 3D?" - talk at Collegium da Vinci

Tue 20 Nov 2018

(EN) This post is an invitation for my talk, which will take place on 7 December 2018 at Collegium da Vinci in Poznań and will be given in Polish.

(PL) I have the pleasure of inviting everyone interested in game development (including in Unity or Unreal Engine, not necessarily advanced graphics programming in C++) to my talk titled „Co działa szybko, a co wolno w grafice 3D?” (“What works fast and what works slow in 3D graphics?”), which will take place on 7 December 2018 at Collegium Da Vinci in Poznań, as part of the „INTERAKCJE” meeting series.

Description: 3D graphics is an essential part of modern video games, and rendering it efficiently is necessary for games to run smoothly in real time. Knowing the basics of this field is useful regardless of the chosen technology (e.g. Unity, Unreal Engine, or a custom engine written in C++). The talk is an overview of the techniques used in graphics rendered with modern GPUs, highlighting which of them can become a performance problem and what methods can be used to achieve better performance.

Comments | #teaching #graphics #events Share

# Vulkan with DXGI - experiment results

Mon 19 Nov 2018

In my previous post, I described a way to get GPU memory usage in a Windows Vulkan app by using DXGI. This API, designed for Direct3D, seems to work with Vulkan as well. In this post I would like to share the detailed results of my experiment on two different platforms with graphics cards from two different vendors. But before that, a disclaimer:

  • All the data presented here have been obtained by just writing an app using the Vulkan and DXGI APIs, treating the graphics card and its driver as a black box. It can be reproduced by any Windows developer.
  • None of this is officially documented anywhere, required by any specification, or expected to work. It may not work for you or may stop working at any time. If you use this method, you do it at your own risk.
  • Specific numbers presented here are anecdotal evidence rather than proof of any general rules. They may change significantly depending on the specific parameters used in the test, as well as on the testing environment, including but not limited to: graphics card vendor and model, and the versions of the graphics driver, Windows, Windows SDK, and Vulkan SDK.

Read full entry > | Comments | #windows #graphics #directx #vulkan Share

# There is a way to query GPU memory usage in Vulkan - use DXGI

Thu 15 Nov 2018

In my GDC 2018 talk “Memory management in Vulkan and DX12” (slides freely available, video behind the GDC Vault paywall) I said that in Direct3D 12 you can query for the exact amount of GPU memory used and available, while in Vulkan there is no way to do that, so I recommended just querying for the memory capacity (VkMemoryHeap::size) and limiting your usage to around 80% of it. It turns out that I wasn’t quite right. If you code for Windows, there is a way to do this. I assumed that the mentioned function IDXGIAdapter3::QueryVideoMemoryInfo is part of the Direct3D 12 interface, while it is actually part of DirectX Graphics Infrastructure (DXGI). This is a more generic, higher-level Windows API that lets you enumerate the adapters (graphics cards) installed in the system and query for their parameters and the outputs (monitors) connected to them. Direct3D refers to this API, but it’s not the same.

The key question is: can you use DXGI to query for GPU memory usage while doing graphics with Vulkan, not D3D11 or D3D12? Would it return reasonable data and not all zeros? The short answer is: YES! I made an experiment - I wrote a simple app that creates various Vulkan objects and queries DXGI for memory usage. The results look very promising. But before I move on to the details, here is a short primer on how to use this DXGI interface, for all non-DirectX developers:

1. Use C++ in Visual Studio. You may also use some other compiler for Windows or another programming language, but it will probably be harder to set up.

2. Install a relatively new Windows SDK.

3. #include <dxgi1_4.h> and <atlbase.h>

4. Link with “dxgi.lib”.

5. Create Factory object:

IDXGIFactory4* dxgiFactory = nullptr;
CreateDXGIFactory1(IID_PPV_ARGS(&dxgiFactory));

Don’t forget to release it at the end:

dxgiFactory->Release();

6. Write a loop to enumerate the available adapters. Choose and remember a suitable one.

IDXGIAdapter3* dxgiAdapter = nullptr;
IDXGIAdapter1* tmpDxgiAdapter = nullptr;
UINT adapterIndex = 0;
while(dxgiFactory->EnumAdapters1(adapterIndex, &tmpDxgiAdapter) != DXGI_ERROR_NOT_FOUND)
{
    DXGI_ADAPTER_DESC1 desc;
    tmpDxgiAdapter->GetDesc1(&desc);
    if(!dxgiAdapter && desc.Flags == 0)
    {
        tmpDxgiAdapter->QueryInterface(IID_PPV_ARGS(&dxgiAdapter));
    }
    tmpDxgiAdapter->Release();
    ++adapterIndex;
}

At the end, don’t forget to release it:

dxgiAdapter->Release();

Please note that using the newer DXGI interfaces like IDXGIFactory4 and IDXGIAdapter3 requires a relatively new version (I’m not sure which one) of both the Windows SDK on the developer’s side (otherwise it won’t compile) and an updated Windows system on the user’s side (otherwise function calls will fail with an appropriate returned HRESULT).

7. To query for GPU memory usage at the moment, use this code:

DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
dxgiAdapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

There are two possible options:

  • DXGI_MEMORY_SEGMENT_GROUP_LOCAL is the memory local to the GPU, so basically video RAM.
  • DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL is the system RAM.

Among the members of the returned structure, the most interesting is CurrentUsage. It seems to precisely reflect the use of GPU memory - it increases when I allocate a new VkDeviceMemory object, as well as when some memory is used implicitly by creating other Vulkan resources, like a swapchain, descriptor pools and descriptor sets, command pools and command buffers, query pools etc.
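
For completeness, here is a small sketch of polling both segment groups (my own helper, assuming dxgiAdapter from step 6 and <cstdio> included):

// Query and print both segment groups of the chosen adapter.
DXGI_QUERY_VIDEO_MEMORY_INFO localInfo = {}, nonLocalInfo = {};
dxgiAdapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &localInfo);
dxgiAdapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocalInfo);
printf("Local: Budget=%llu CurrentUsage=%llu\n",
    (unsigned long long)localInfo.Budget, (unsigned long long)localInfo.CurrentUsage);
printf("Nonlocal: Budget=%llu CurrentUsage=%llu\n",
    (unsigned long long)nonLocalInfo.Budget, (unsigned long long)nonLocalInfo.CurrentUsage);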

Other DXGI features related to video memory - the callback for budget change notifications (IDXGIAdapter3::RegisterVideoMemoryBudgetChangeNotificationEvent) and reservations (IDXGIAdapter3::SetVideoMemoryReservation) - may also work with Vulkan, but I didn’t check them.

As an example, on my system with an AMD Radeon RX 580 8 GB GPU and 16 GB of system RAM, at program startup and before any Vulkan or D3D initialization, DXGI reports the following data:

Local:
 Budget=7252479180 CurrentUsage=0
 AvailableForReservation=3839547801 CurrentReservation=0
Nonlocal:
 Budget=7699177267 CurrentUsage=0
 AvailableForReservation=4063454668 CurrentReservation=0

8. You may want to choose the correct DXGI adapter, matching the physical device used in Vulkan. Even on a system with just one discrete GPU there are two adapters reported, one of them being a software renderer. I exclude it by checking desc.Flags == 0, which means this is a real, hardware-accelerated GPU, not DXGI_ADAPTER_FLAG_REMOTE or DXGI_ADAPTER_FLAG_SOFTWARE.

The good news is that even when there are more such adapters in the system, there is a way to match them between DXGI and Vulkan. Both APIs return something called a Locally Unique Identifier (LUID). In DXGI it’s in DXGI_ADAPTER_DESC1::AdapterLuid. In Vulkan it’s in VkPhysicalDeviceIDProperties::deviceLUID. They are of different types - two 32-bit numbers versus an array of bytes - but it seems that a simple, raw memory compare works correctly. So the way to find the DXGI adapter matching a Vulkan physical device is:

// After obtaining VkPhysicalDevice of your choice:
VkPhysicalDeviceIDProperties physDeviceIDProps = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES };
VkPhysicalDeviceProperties2 physDeviceProps = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2 };
physDeviceProps.pNext = &physDeviceIDProps;
vkGetPhysicalDeviceProperties2(physicalDevice, &physDeviceProps);

// While enumerating DXGI adapters, replace condition:
// if(!dxgiAdapter && desc.Flags == 0)
// With this:
if(memcmp(&desc.AdapterLuid, physDeviceIDProps.deviceLUID, VK_LUID_SIZE) == 0)

Please note that the function vkGetPhysicalDeviceProperties2 requires Vulkan 1.1, so set VkApplicationInfo::apiVersion = VK_API_VERSION_1_1. Otherwise the call results in an “Access Violation” error.
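
A minimal sketch of setting that up during instance creation (error checking, layers, and extensions omitted):

VkApplicationInfo appInfo = { VK_STRUCTURE_TYPE_APPLICATION_INFO };
appInfo.apiVersion = VK_API_VERSION_1_1; // needed for vkGetPhysicalDeviceProperties2

VkInstanceCreateInfo instanceInfo = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
instanceInfo.pApplicationInfo = &appInfo;

VkInstance instance = VK_NULL_HANDLE;
vkCreateInstance(&instanceInfo, nullptr, &instance);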

In my next blog post, I present the detailed results of my experiment with DXGI used in a Vulkan application, tested on 2 different GPUs.

Comments | #windows #graphics #vulkan #directx Share
