CUDA Programming

Kernel design, memory access, and runtime optimization.

CUDA Keywords and Memory Qualifiers

Theory

CUDA extends C++ with a small set of qualifiers that define where code executes and where data lives.

These qualifiers tell the compiler whether a function runs on the host (CPU) or the device (GPU), and where a variable is stored in the GPU memory hierarchy.

Execution space qualifiers

Qualifier       Meaning
__global__      Kernel: launched from the host (CPU), executed on the device (GPU); must return void
__device__      Function called from and executed on the device (GPU)
__host__        Ordinary CPU-only C++ function (the default when no qualifier is given)
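The three execution-space qualifiers can be seen together in a minimal sketch (the kernel and function names here are illustrative, not from the source; unified memory is used to keep the host-side code short):

```cuda
#include <cstdio>

// __device__: callable only from code already running on the GPU.
__device__ float square(float x) { return x * x; }

// __global__: a kernel — launched from the host, executed on the device.
__global__ void squareKernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = square(in[i]);  // calls a __device__ function
}

// __host__ is the default; main() could be written without the qualifier.
__host__ int main() {
    const int n = 256;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));   // unified memory for brevity
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = float(i);

    squareKernel<<<(n + 127) / 128, 128>>>(in, out, n);  // host-side launch
    cudaDeviceSynchronize();  // wait for the GPU before reading results

    printf("out[10] = %f\n", out[10]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

A function can also carry both __host__ and __device__, which makes the compiler emit a CPU and a GPU version from the same source.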

Memory qualifiers

Qualifier       Memory behavior
__shared__      Places a variable in the per-block shared memory on the SM, visible to all threads in that block
__constant__    Places read-only data in constant memory, served through a dedicated cache
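A short sketch of both memory qualifiers in use (the names blockSum, scale, and tile are illustrative): each block stages its inputs in __shared__ memory and reduces them, while a small __constant__ table is read by every thread.

```cuda
#include <cstdio>

// __constant__: read-only table in constant memory,
// filled from the host with cudaMemcpyToSymbol.
__constant__ float scale[4];

__global__ void blockSum(const float* in, float* out) {
    // __shared__: one copy per block, on-chip, visible to all its threads.
    __shared__ float tile[128];

    int tid = threadIdx.x;
    tile[tid] = in[blockIdx.x * blockDim.x + tid] * scale[tid % 4];
    __syncthreads();  // all writes must land before any thread reads

    // Tree reduction within the block using shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = tile[0];  // one result per block
}

int main() {
    const float h_scale[4] = {1.0f, 1.0f, 1.0f, 1.0f};
    cudaMemcpyToSymbol(scale, h_scale, sizeof(h_scale));
    // Allocate in/out buffers, then launch with a block size matching tile:
    // blockSum<<<numBlocks, 128>>>(in, out);
    return 0;
}
```

Note that __constant__ variables live at file scope and are written from the host via cudaMemcpyToSymbol, not by ordinary assignment, while __shared__ variables exist only for the lifetime of a block.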