Accelerating AI Processing in Intel Open Source Library with NPU on Core Ultra Chip

Intel has released the open-source Intel NPU Acceleration Library, a Python library for utilizing the Neural Processing Unit (NPU) in its CPUs from Core Ultra (Meteor Lake) onwards.

In 2024, NPUs are expected to draw increasing attention, as both Intel and AMD have begun incorporating them into their CPUs, in line with Microsoft’s push for AI PCs across various markets.

Intel’s NPU library supports fundamental AI processing features such as Float16 computation (with BFloat16 planned), 8-bit quantization, and performance tuning for PyTorch via torch.compile. Planned additions include running NPU workloads in conjunction with GPUs and mixed-precision processing across different floating-point formats.

The library is currently available on Windows and Linux (specifically Ubuntu).
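As a rough sketch of how the library is meant to be used: the snippet below follows the pattern shown in the project’s GitHub README, where an existing PyTorch model is compiled for the NPU with a chosen dtype. The `intel_npu_acceleration_library.compile` call and its `dtype` parameter are taken from that README and may change between releases; actually running this requires a Core Ultra (Meteor Lake) NPU and the library installed.

```python
# Sketch only: assumes the intel_npu_acceleration_library package is
# installed and a Core Ultra (Meteor Lake) NPU is present.
import torch
import intel_npu_acceleration_library

# An ordinary PyTorch model, defined as usual.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

# Compile the model for the NPU in Float16; per the README,
# 8-bit quantization would instead pass dtype=torch.int8.
npu_model = intel_npu_acceleration_library.compile(model, dtype=torch.float16)

# Inference then proceeds as with any PyTorch model.
with torch.no_grad():
    output = npu_model(torch.rand(1, 256))
```

The appeal of this design is that existing PyTorch code needs only the one compile call to offload to the NPU, rather than a rewrite against a separate runtime API.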

Source: Intel GitHub, Phoronix

TLDR: Intel has introduced the open-source Intel NPU Acceleration Library, a Python library for using the NPU in its new CPUs, supporting core AI processing features and PyTorch compatibility.
