
Accelerating AI Processing in Intel Open Source Library with NPU on Core Ultra Chip

Intel has released the open-source Intel NPU Acceleration Library, a Python library for using the Neural Processing Unit (NPU) built into its CPUs from Core Ultra (Meteor Lake) onward.

Expect to hear much more about NPUs in 2024: both Intel and AMD have begun integrating them into their CPUs, in line with Microsoft’s push for AI PCs across various markets.

Intel’s NPU library supports fundamental AI processing features such as Float16 computation (with BFloat16 coming later), 8-bit quantization, and PyTorch performance tuning via torch.compile. Planned features include combined NPU-GPU processing and mixed-precision computation across different floating-point formats.
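To illustrate what 8-bit quantization means in this context, here is a minimal sketch of symmetric int8 quantization, the general technique used to shrink model weights for faster inference. This is plain Python for clarity and is not Intel's actual implementation; the function names are hypothetical.

```python
def quantize_int8(values):
    """Map floats to int8 range [-127, 127] using a single scale factor.

    Symmetric quantization: scale is chosen so the largest-magnitude
    value maps to +/-127, and every value is rounded to the nearest step.
    """
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale


def dequantize_int8(quantized, scale):
    """Recover approximate floats from the int8 values and the scale."""
    return [q * scale for q in quantized]


# Example: four "weights" quantized to int8 and recovered approximately.
weights = [0.5, -1.27, 0.031, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
```

Storing weights as int8 plus one scale factor cuts memory use to a quarter of Float32, at the cost of a small rounding error per weight, which is the trade-off NPU-targeted libraries exploit.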

The library is currently available for Windows and Linux (specifically Ubuntu).

Source: Intel GitHub, Phoronix

TLDR: Intel has released the Intel NPU Acceleration Library, a Python library for using the NPU in its new CPUs, supporting core AI processing features and PyTorch compatibility.
