CUDA Programming
Kernel design, memory access, and runtime optimization.
CUDA Keywords and Memory Qualifiers
Theory
CUDA extends C++ with declaration qualifiers that define where code executes and where data lives.
These qualifiers tell the compiler whether a function runs on the CPU (host) or the GPU (device), and in which memory space a variable is stored.
Execution space qualifiers
| Qualifier | Meaning |
|---|---|
| __global__ | Kernel function: launched from the CPU, executed on the GPU |
| __device__ | Function called from and executed on the GPU |
| __host__ | Standard CPU-only C++ function (the default when no qualifier is given) |
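A minimal sketch showing all three execution-space qualifiers in one file (the names `square` and `squareAll` are illustrative, not from the original):

```cuda
#include <cstdio>

// __device__: callable only from GPU code, executed on the GPU.
__device__ float square(float x) { return x * x; }

// __global__: a kernel -- launched from the host, executed on the device.
__global__ void squareAll(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = square(in[i]);   // device function called from a kernel
}

// __host__ (implied by default): ordinary CPU code.
__host__ int main() {
    const int n = 256;
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));   // unified memory, visible to CPU and GPU
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = float(i);

    squareAll<<<(n + 127) / 128, 128>>>(in, out, n);  // launch: grid of 128-thread blocks
    cudaDeviceSynchronize();                          // wait for the kernel to finish

    printf("out[3] = %.1f\n", out[3]);                // 3^2 = 9.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Note that `__host__` is rarely written explicitly; it matters mainly in the combination `__host__ __device__`, which compiles one function for both sides.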
Memory qualifiers
| Qualifier | Memory behavior |
|---|---|
| __shared__ | Places a variable in fast on-chip shared memory, visible to all threads of one block on its SM |
| __constant__ | Places read-only values in constant memory, served to kernels through a cached read path |
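A sketch of both memory qualifiers together (the kernel name and the scale factor are made up for illustration): a `__constant__` scalar set once from the host, and a per-block `__shared__` tile that threads fill cooperatively.

```cuda
#include <cstdio>

// __constant__: read-only on the device, cached. Written from the host
// with cudaMemcpyToSymbol before the kernel launch.
__constant__ float scale;

__global__ void scaleAll(const float* in, float* out, int n) {
    // __shared__: one tile per block, sized to match the 128-thread blocks below.
    __shared__ float tile[128];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tile[threadIdx.x] = in[i];   // stage global memory into shared
    __syncthreads();                        // all writes to tile are now visible
    if (i < n) out[i] = scale * tile[threadIdx.x];
}

int main() {
    const int n = 256;
    float host_scale = 2.0f;
    cudaMemcpyToSymbol(scale, &host_scale, sizeof(float));  // fill constant memory

    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    scaleAll<<<n / 128, 128>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("out[0] = %.1f\n", out[0]);      // 1.0 * 2.0 = 2.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Staging through shared memory pays off when threads in a block reread each other's values; here it only demonstrates the qualifier and the `__syncthreads()` barrier that shared-memory access requires.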