Accelerating AI Processing in Intel Open Source Library with NPU on Core Ultra Chip

Intel has released the open-source Intel NPU Acceleration Library, a Python library for running workloads on the Neural Processing Unit (NPU) built into its CPUs from Core Ultra (Meteor Lake) onward.

Expect NPUs to be a growing topic in 2024: both Intel and AMD have begun integrating them into their CPUs, in line with Microsoft's push for AI PCs across various markets.

Intel's NPU library supports fundamental AI processing features such as Float16 computation (with BFloat16 planned), 8-bit quantization, and PyTorch performance tuning via torch.compile. Planned additions include running workloads on the NPU in conjunction with the GPU, and mixed-precision processing across different floating-point formats.
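To give a sense of what the 8-bit quantization mentioned above involves, here is a minimal, generic sketch of symmetric int8 quantization in pure Python. This is an illustration of the general technique, not the library's actual implementation; function names here are hypothetical.

```python
def quantize_int8(values):
    """Symmetric 8-bit quantization: map floats onto integers in [-127, 127].

    The largest absolute value in the input maps to 127, so every value
    fits in a signed 8-bit integer, at the cost of rounding error.
    """
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from int8 codes and the scale."""
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# q holds small integers storable in 1 byte each; approx is close to weights
```

Storing weights this way quarters memory traffic versus Float32, which is one reason 8-bit quantization is a standard feature for accelerators like NPUs.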

Currently, the library is available on Windows and Linux (officially Ubuntu).

Source: Intel GitHub, Phoronix

TLDR: Intel introduces the Intel NPU Acceleration Library in Python for NPU usage in their new CPUs, supporting AI processing features and compatibility with PyTorch.
