oneDNN is an open-source performance library for deep learning applications. It provides basic building blocks for neural networks, optimized for Intel Architecture Processors and Intel Processor Graphics. oneDNN is intended for developers of deep learning applications and frameworks who want to improve application performance on Intel CPUs and GPUs.
The library provides optimized implementations of the following classes of operations:

* Compute intensive operations: convolution, inner product, GEMM, and RNN cells
* Memory bandwidth limited operations: pooling, batch normalization, local response normalization, and elementwise operations
* Data manipulation: reorder, concat, and sum
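As a quick illustration of how these building blocks are used together, the sketch below creates and executes a single memory bandwidth limited primitive (a ReLU eltwise) on the CPU engine. This is a minimal sketch assuming the oneDNN 1.x/2.x C++ API (`dnnl.hpp`); the tensor shape, fill values, and in-place execution are arbitrary choices for illustration, and the full walkthrough is the getting_started_cpp example listed in the table below.

```cpp
#include "dnnl.hpp"

using namespace dnnl;

int main() {
    // Engine and stream: a CPU device and an execution queue on it.
    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    // A 1x3x13x13 f32 tensor in plain NCHW layout (shape chosen arbitrarily).
    memory::desc md({1, 3, 13, 13}, memory::data_type::f32,
            memory::format_tag::nchw);
    memory mem(md, eng);

    // On the CPU engine the memory handle is a plain pointer we can fill.
    float *data = static_cast<float *>(mem.get_data_handle());
    for (size_t i = 0; i < md.get_size() / sizeof(float); ++i)
        data[i] = static_cast<float>(i) - 100.f;

    // ReLU is exposed as an eltwise primitive (alpha/beta are unused here).
    eltwise_forward::desc relu_d(prop_kind::forward_inference,
            algorithm::eltwise_relu, md, 0.f, 0.f);
    eltwise_forward::primitive_desc relu_pd(relu_d, eng);
    eltwise_forward relu(relu_pd);

    // Execute in place and wait for the stream to finish.
    relu.execute(s, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
    s.wait();
    return 0;
}
```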
| Topic | Engine | C++ API | C API |
|---|---|---|---|
| Tutorials | CPU/GPU | getting_started_cpp | |
| | CPU/GPU | memory_format_propagation_cpp | |
| | CPU/GPU | performance_profiling_cpp | |
| | CPU/GPU | cross_engine_reorder_cpp | cross_engine_reorder_c |
| | GPU | gpu_opencl_interop_cpp | |
| f32 inference | CPU/GPU | cnn_inference_f32_cpp | cnn_inference_f32_c |
| | CPU | cpu_rnn_inference_f32_cpp | |
| int8 inference | CPU/GPU | cnn_inference_int8_cpp | |
| | CPU | cpu_rnn_inference_int8_cpp | |
| f32 training | CPU/GPU | cnn_training_f32_cpp | |
| | CPU | | cpu_cnn_training_f32_c |
| | CPU/GPU | rnn_training_f32_cpp | |
| bf16 training | CPU/GPU | cnn_training_bf16_cpp | |
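Several of the tutorials above (cross_engine_reorder_cpp, memory_format_propagation_cpp) revolve around the reorder primitive, which copies data between two memory objects that describe the same logical tensor in different layouts or on different engines. The sketch below is a minimal, hypothetical illustration using the oneDNN 1.x/2.x C++ API on a single CPU engine; the blocked nChw16c layout is just one example of a format a CPU primitive might prefer over plain NCHW.

```cpp
#include "dnnl.hpp"

using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    // The same logical 1x16x8x8 tensor described in two layouts:
    // plain NCHW and a 16-channel blocked format used by many CPU kernels.
    memory::dims dims = {1, 16, 8, 8};
    memory::desc src_md(dims, memory::data_type::f32, memory::format_tag::nchw);
    memory::desc dst_md(dims, memory::data_type::f32, memory::format_tag::nChw16c);
    memory src(src_md, eng);
    memory dst(dst_md, eng);

    // A reorder primitive converts the data from one layout to the other.
    reorder r(src, dst);
    r.execute(s, src, dst);
    s.wait();
    return 0;
}
```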