
Accelerating AI Processing in Intel Open Source Library with NPU on Core Ultra Chip

Intel has released the open-source Intel NPU Acceleration Library, a Python library for running workloads on the Neural Processing Unit (NPU) built into its CPUs from Core Ultra (Meteor Lake) onward.

Expect to hear much more about NPUs in 2024: both Intel and AMD have begun building them into their CPUs, in line with Microsoft’s push for AI PCs across various markets.

Intel’s NPU library supports fundamental AI processing features such as Float16 computation (with BFloat16 planned), 8-bit quantization, and PyTorch performance tuning via torch.compile. Planned future features include running NPU workloads in conjunction with GPUs and mixed-precision processing across different floating-point formats.
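To illustrate what the 8-bit quantization feature refers to conceptually, here is a minimal sketch of symmetric int8 quantization in plain Python. This is a generic illustration of the technique, not Intel's actual implementation, and the function names are invented for this example.

```python
def quantize_int8(values):
    """Map floats to int8 range [-127, 127] using a single per-tensor scale.

    This is the basic idea behind 8-bit weight quantization: store small
    integers plus one scale factor instead of full-precision floats.
    """
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127 if max_abs else 1.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from int8 codes and the scale."""
    return [q * scale for q in quantized]

# Example: quantize a tiny set of hypothetical weights and recover them.
weights = [0.5, -1.0, 0.25, 0.75]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
# Each restored value differs from the original by at most half the scale.
```

The memory saving comes from storing one byte per value instead of four (Float32) or two (Float16), at the cost of a small, bounded rounding error.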

Currently, this library is available for use on Windows and Linux (specifically Ubuntu).

Source: Intel GitHub, Phoronix

TLDR: Intel introduces the Intel NPU Acceleration Library in Python for NPU usage in their new CPUs, supporting AI processing features and compatibility with PyTorch.
