Question
Would using multi-GPUs in Vulkan be something like making many command queues then dividing command buffers between them?
There are two questions:
- In OpenGL, we use GLEW to get functions. With more than one GPU, each GPU has its own driver. How do we use Vulkan?
- Would part of the frame be generated by one GPU and other parts by other GPUs, like using the Intel GPU to render the UI and the AMD or Nvidia GPU to render the game screen on a laptop? Or would one frame be generated on one GPU and the next frame on another?
Answer
Updated with more recent information, now that Vulkan exists.
There are two kinds of multi-GPU setups: where multiple GPUs are part of some SLI-style setup, and the kind where they are not. Vulkan supports both, and supports them both in the same computer. That is, you can have two NVIDIA GPUs that are SLI-ed together, and the Intel embedded GPU, and Vulkan can interact with them all.
In Vulkan, there is something called the Vulkan instance. This represents the base Vulkan system itself; individual devices register themselves to the instance. The Vulkan instance system is, essentially, implemented by the Vulkan SDK.
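As a minimal sketch of what creating that instance looks like (the application name here is arbitrary, and this assumes the Vulkan loader and headers are installed):

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    /* Describe the application; apiVersion requests a Vulkan 1.1 instance. */
    VkApplicationInfo appInfo = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "multi-gpu-demo",
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo createInfo = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &appInfo,
    };

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&createInfo, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    /* ... enumerate physical devices here ... */

    vkDestroyInstance(instance, NULL);
    return 0;
}
```

Every physical device you later query comes through this one instance handle, regardless of which vendor's driver backs it.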
Physical devices represent a specific piece of hardware that implements the interface to a GPU. Each piece of hardware that exposes a Vulkan implementation does so by registering its physical device with the instance system. You can query which physical devices are available, as well as some basic properties about them (their names, how much memory they offer, etc).
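A sketch of that query, assuming a valid `VkInstance` named `instance` already exists:

```c
/* Enumerate every physical device registered with the instance
   and print its name and how many memory heaps it exposes. */
uint32_t count = 0;
vkEnumeratePhysicalDevices(instance, &count, NULL);

VkPhysicalDevice gpus[16];          /* 16 is an arbitrary upper bound */
if (count > 16) count = 16;
vkEnumeratePhysicalDevices(instance, &count, gpus);

for (uint32_t i = 0; i < count; ++i) {
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(gpus[i], &props);

    VkPhysicalDeviceMemoryProperties mem;
    vkGetPhysicalDeviceMemoryProperties(gpus[i], &mem);

    printf("GPU %u: %s (%u memory heaps)\n",
           i, props.deviceName, mem.memoryHeapCount);
}
```

On a laptop with hybrid graphics this loop would typically report both the integrated GPU and the discrete one, which is exactly the non-SLI multi-GPU case described above.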
You then create logical devices for the physical devices you use. Logical devices are how you actually do stuff in Vulkan. They have queues, command buffers, etc. And each logical device is separate... mostly.
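A sketch of that step, as a hypothetical helper function (the function name and the assumption that `graphicsFamily` is a queue family index on `phys` supporting your workload are mine, not from the original answer):

```c
/* Turn one physical device into a logical device with a single queue. */
VkDevice makeLogicalDevice(VkPhysicalDevice phys, uint32_t graphicsFamily) {
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .queueFamilyIndex = graphicsFamily,
        .queueCount = 1,
        .pQueuePriorities = &priority,
    };
    VkDeviceCreateInfo devInfo = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .queueCreateInfoCount = 1,
        .pQueueCreateInfos = &queueInfo,
    };

    VkDevice device = VK_NULL_HANDLE;
    if (vkCreateDevice(phys, &devInfo, NULL, &device) != VK_SUCCESS)
        return VK_NULL_HANDLE;

    /* Queues belong to the logical device; this is where its
       command buffers get submitted. */
    VkQueue queue;
    vkGetDeviceQueue(device, graphicsFamily, 0, &queue);
    return device;
}
```

Calling this once per physical device gives you the independent, per-GPU logical devices the answer describes.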
Now, you can bypass the whole "instance" thing and load devices manually. But you really shouldn't. At least, not unless you're at the end of development. Vulkan layers are far too critical for day-to-day debugging to just opt out of that.
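For reference, opting in to the standard Khronos validation layer is a one-field change at instance creation (sketch; `VK_LAYER_KHRONOS_validation` is the layer name shipped with the Vulkan SDK):

```c
/* Enable the validation layer for every device created
   through this instance. */
const char *layers[] = { "VK_LAYER_KHRONOS_validation" };

VkInstanceCreateInfo createInfo = {
    .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
    .enabledLayerCount = 1,
    .ppEnabledLayerNames = layers,
};
/* Pass to vkCreateInstance as usual. */
```

This is what you give up by loading devices manually: the layers sit between your calls and the drivers, so they only exist if you go through the instance machinery.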
There are mechanisms, core in Vulkan 1.1, that allow individual devices to be able to communicate some information to other devices. In 1.1, only certain kinds of information can be shared across physical devices (namely, fences and semaphores, and even then, only on Linux through sync files). While these APIs could provide a mechanism for sharing data between two physical devices, at present, the restriction on most forms of data sharing is that both physical devices must have matching UUIDs (and therefore are the same physical device).
Dealing with SLI is covered by two Vulkan 1.0 extensions: KHR_device_group and KHR_device_group_creation. The former is for dealing with "device groups" in Vulkan, while the latter is an instance extension for creating device-grouped devices. Both of these are core in Vulkan 1.1.
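A sketch of using the 1.1 core form of this, assuming `instance` is a valid Vulkan 1.1 `VkInstance`:

```c
/* Enumerate SLI-style device groups, then prepare to create one
   logical device spanning all physical devices in the first group. */
uint32_t groupCount = 0;
vkEnumeratePhysicalDeviceGroups(instance, &groupCount, NULL);

VkPhysicalDeviceGroupProperties groups[8];   /* arbitrary upper bound */
for (uint32_t i = 0; i < 8; ++i) {
    groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    groups[i].pNext = NULL;
}
if (groupCount > 8) groupCount = 8;
vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups);

/* Chain this into VkDeviceCreateInfo::pNext when calling vkCreateDevice,
   and the resulting VkDevice covers every GPU in the group. */
VkDeviceGroupDeviceCreateInfo groupInfo = {
    .sType = VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO,
    .physicalDeviceCount = groups[0].physicalDeviceCount,
    .pPhysicalDevices = groups[0].physicalDevices,
};
```

A non-SLI system still reports groups here; each group just contains a single physical device.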
The idea with this is that the SLI aggregation is exposed as a single VkDevice, which is created from a number of VkPhysicalDevices. Each internal physical device is a "sub-device". You can query sub-devices and some properties about them. Memory allocations are specific to a particular sub-device. Resource objects (buffers and images) are not specific to a sub-device, but they can be associated with different memory allocations on the different sub-devices.
Command buffers and queues are not specific to sub-devices; when you execute a CB on a queue, the driver figures out which sub-device(s) it will run on, and fills in the descriptors that use the images/buffers with the proper GPU pointers for the memory that those images/buffers have been bound to on those particular sub-devices.
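You can also steer recording explicitly with device masks. A sketch, assuming `cmd` is an allocated `VkCommandBuffer` from a device-group device:

```c
/* Restrict the recorded commands to sub-device 0 of the group.
   Bit N of the mask selects sub-device N. */
VkDeviceGroupCommandBufferBeginInfo groupBegin = {
    .sType = VK_STRUCTURE_TYPE_DEVICE_GROUP_COMMAND_BUFFER_BEGIN_INFO,
    .deviceMask = 0x1,
};
VkCommandBufferBeginInfo begin = {
    .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    .pNext = &groupBegin,
};
vkBeginCommandBuffer(cmd, &begin);

/* The mask can be narrowed further mid-recording: */
vkCmdSetDeviceMask(cmd, 0x1);
```

With the mask left at all-ones, the same command buffer runs on every sub-device, which is what makes the driver-managed distribution described above possible.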
Alternate-frame rendering is simply presenting images generated from one sub-device on one frame, then presenting images from a different sub-device on another frame. Split-frame rendering is handled by a more complex mechanism, where you define the memory for the destination image of a rendering command to be split among devices. You can even do this with presentable images.
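The alternate-frame case can be sketched at submit time by rotating the device mask (here `frame`, `subDeviceCount`, `cmd`, and `queue` are assumed to be tracked by the application):

```c
/* Alternate-frame rendering: send each frame's work to a
   different sub-device by rotating the submission device mask. */
uint32_t mask = 1u << (frame % subDeviceCount);

VkDeviceGroupSubmitInfo groupSubmit = {
    .sType = VK_STRUCTURE_TYPE_DEVICE_GROUP_SUBMIT_INFO,
    .commandBufferCount = 1,
    .pCommandBufferDeviceMasks = &mask,   /* one mask per command buffer */
};
VkSubmitInfo submit = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .pNext = &groupSubmit,
    .commandBufferCount = 1,
    .pCommandBuffers = &cmd,
};
vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
```

Frame 0 then runs on sub-device 0, frame 1 on sub-device 1, and so on, matching the alternate-frame scheme the answer describes.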