Intel Releases Open-Source Library to Accelerate AI Processing with the NPU on Core Ultra Chips

Intel has released the open-source Intel NPU Acceleration Library, a Python library for utilizing the Neural Processing Unit (NPU) in its CPUs from the Core Ultra (Meteor Lake) generation onward.
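As a quick illustration, here is a minimal sketch of the basic workflow described in the project's GitHub README: install the package from PyPI and offload an existing PyTorch model to the NPU. The package name and the compile() entry point are taken from the README; treat the exact signature as an assumption that may differ between releases.

```python
# Minimal sketch, assuming the PyPI package name and the compile()
# entry point shown in the project's README:
#   pip install intel-npu-acceleration-library
import torch
import intel_npu_acceleration_library

# Any PyTorch model works as an example; here, a tiny MLP.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).eval()

# compile() returns a model whose supported layers run on the NPU.
# Running this requires a Core Ultra NPU and Intel's NPU driver.
optimized_model = intel_npu_acceleration_library.compile(model)

with torch.no_grad():
    output = optimized_model(torch.rand(1, 256))
print(output.shape)  # torch.Size([1, 10])
```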

Expect NPUs to be a bigger talking point in 2024: both Intel and AMD have begun building them into their CPUs, in line with Microsoft's push for AI PCs across markets.

Intel’s NPU library supports fundamental AI processing features: computation in Float16 (with BFloat16 to follow), 8-bit quantization, and performance tuning for PyTorch via torch.compile. Planned additions include running NPU workloads alongside the GPU and mixed-precision processing across different floating-point widths.
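The library's README documents 8-bit quantization through the same compile() call by passing a dtype argument. The snippet below is a sketch under that assumption; torch.int8 as the dtype value comes from the library's examples and is not guaranteed to match every release.

```python
import torch
import intel_npu_acceleration_library

model = torch.nn.Sequential(
    torch.nn.Linear(256, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).eval()

# Float16 computation is the documented default; passing torch.int8
# (an assumed parameter value from the README's examples) requests
# 8-bit quantization of the weights before offloading to the NPU.
quantized_model = intel_npu_acceleration_library.compile(model, dtype=torch.int8)

with torch.no_grad():
    print(quantized_model(torch.rand(1, 256)).shape)
```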

Currently, this library is available for use on Windows and Linux (specifically Ubuntu).

Source: Intel GitHub, Phoronix

TLDR: Intel has introduced the Intel NPU Acceleration Library, a Python library for using the NPU in its new CPUs, with support for core AI processing features and PyTorch compatibility.
