How to Use GPU Computing in Rust (wgpu, CUDA bindings)

Use the wgpu crate to initialize a device and queue and run compute shaders written in WGSL; Rust code itself compiles for the GPU only through nightly features such as the extern "ptx-kernel" ABI on the NVIDIA PTX target.

Use the wgpu crate for portable GPU compute and rendering, or cuda-sys (and the higher-level Rust-CUDA ecosystem) for direct CUDA bindings. With wgpu, kernels are not Rust functions: there is no "gpu-kernel" ABI in Rust, and kernels are compute shaders written in WGSL and compiled at runtime. The extern "ptx-kernel" ABI exists only on nightly Rust for the nvptx64-nvidia-cuda target, where it marks functions that become PTX kernel entry points. Add the crate to your Cargo.toml, initialize a device and queue, and submit work through the queue.

use wgpu::BufferUsages;

// With wgpu, the GPU kernel is a compute shader written in WGSL and
// compiled at runtime -- not a Rust function with a special ABI.
const SHADER: &str = r#"
@group(0) @binding(0) var<storage, read_write> data: array<u32>;

@compute @workgroup_size(64)
fn my_kernel(@builtin(global_invocation_id) id: vec3<u32>) {
    data[id.x] = data[id.x] * 2u;
}
"#;

fn main() {
    let instance = wgpu::Instance::new(wgpu::InstanceDescriptor::default());
    let adapter = futures::executor::block_on(
        instance.request_adapter(&wgpu::RequestAdapterOptions::default()),
    )
    .expect("no suitable GPU adapter found");
    let (device, queue) = futures::executor::block_on(
        adapter.request_device(&wgpu::DeviceDescriptor::default(), None),
    )
    .expect("failed to create device");

    // Compile the WGSL kernel into a shader module.
    let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("kernel"),
        source: wgpu::ShaderSource::Wgsl(SHADER.into()),
    });

    // Storage buffer the kernel reads and writes.
    let buffer = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("GPU Buffer"),
        size: 1024,
        usage: BufferUsages::STORAGE | BufferUsages::COPY_DST,
        mapped_at_creation: false,
    });
    queue.write_buffer(&buffer, 0, &[0u8; 1024]);
    let _ = shader; // next: build a ComputePipeline and dispatch it
}
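Note that queue.write_buffer takes raw bytes (&[u8]), so typed host data must be serialized before upload. A minimal stdlib-only sketch of that packing (crates like bytemuck do this more conveniently; the function name to_bytes is our own, not a wgpu API):

```rust
// Pack a slice of f32 values into little-endian bytes, the layout
// WGSL expects for a storage buffer of 32-bit values.
fn to_bytes(values: &[f32]) -> Vec<u8> {
    values.iter().flat_map(|v| v.to_le_bytes()).collect()
}

fn main() {
    let data = [1.0f32, 2.0, 3.0];
    let bytes = to_bytes(&data);
    // Each f32 occupies 4 bytes, so 3 values produce 12 bytes.
    assert_eq!(bytes.len(), 12);
    // These bytes could now be passed to queue.write_buffer(&buffer, 0, &bytes).
}
```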
  1. Add wgpu = "0.20.1" to your Cargo.toml dependencies.
  2. Initialize the wgpu::Instance and request an Adapter using request_adapter.
  3. Request a Device and Queue from the adapter using request_device.
  4. Write your GPU kernel as a WGSL compute shader and compile it with device.create_shader_module.
  5. Create a Buffer with BufferUsages::STORAGE to hold data for the kernel.
  6. Upload data with queue.write_buffer, then encode a compute pass in a CommandEncoder and run it with queue.submit.
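Beyond step 6, actually running the kernel means recording a compute pass and calling dispatch_workgroups; the group count is the element count divided by the shader's workgroup size, rounded up. A stdlib-only sketch of that sizing arithmetic (the workgroup size of 64 matches the @workgroup_size(64) annotation above; the helper is ours, not a wgpu API):

```rust
// Number of workgroups needed to cover `n` elements when each
// workgroup processes `workgroup_size` elements (ceiling division).
fn workgroup_count(n: u32, workgroup_size: u32) -> u32 {
    (n + workgroup_size - 1) / workgroup_size
}

fn main() {
    // A 1024-byte storage buffer holds 256 u32 elements.
    let elements = 1024 / 4;
    let groups = workgroup_count(elements, 64);
    // This value would be passed as pass.dispatch_workgroups(groups, 1, 1).
    assert_eq!(groups, 4);
}
```

Rounding up matters: with 257 elements, 5 workgroups are needed, and the shader should bounds-check id.x against the element count so the extra invocations do nothing.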