May 27, 2024

Intel AI Platforms Accelerate Microsoft Phi-3 GenAI Models

As part of its mission to bring AI everywhere, Intel continuously invests in the AI software ecosystem by collaborating with AI leaders and innovators.

Intel has validated and optimized its AI product portfolio across client, edge and data center for several of Microsoft’s Phi-3 family of open models. The Phi-3 family of small, open models can run on lower-compute hardware, be more easily fine-tuned to meet specific requirements, and enable developers to build applications that run locally. Intel’s supported products include Intel® Gaudi® AI accelerators and Intel® Xeon® processors for data center applications, and Intel® Core™ Ultra processors and Intel® Arc™ graphics for client devices.
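One reason small models are easier to fine-tune on client hardware is that lightweight customization methods update only a small fraction of the weights. As an illustrative sketch only (not Intel's or Microsoft's code; sizes and rank are made up), a low-rank adaptation (LoRA-style) update adds a small trainable correction A·B to a frozen weight matrix:

```python
import numpy as np

# Hypothetical sizes for illustration: a frozen 512x512 weight matrix
# and a rank-8 adapter -- the adapter holds about 3% of the parameters.
d, r = 512, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen base weight (not trained)
A = rng.standard_normal((d, r)) * 0.01   # trainable low-rank factor
B = np.zeros((r, d))                     # trainable low-rank factor (starts at 0)

def forward(x):
    # Effective weight is W + A @ B; only A and B would be updated in training.
    return x @ (W + A @ B)

x = rng.standard_normal((1, d))
# With B initialized to zero, the adapted model matches the base model exactly.
assert np.allclose(forward(x), x @ W)

trainable = A.size + B.size
print(f"trainable params: {trainable} of {W.size} "
      f"({100 * trainable / W.size:.1f}%)")
```

The point of the sketch is the parameter count: only A and B (a few percent of the weights) need gradients and optimizer state, which is what makes fine-tuning feasible on an AI PC or edge device.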

Intel worked with Microsoft to enable Phi-3 model support for its central processing units (CPUs), graphics processing units (GPUs) and Intel Gaudi accelerators on launch day. Intel also co-designed the accelerator abstraction in DeepSpeed, which is an easy-to-use deep learning optimization software suite, and extended the automatic tensor parallelism support for Phi-3 and other models on Hugging Face.
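Tensor parallelism splits a layer's weight matrix across devices so that each computes only a partial result. A minimal numpy sketch of the idea behind column-parallel sharding (this is not DeepSpeed's actual API; the shard count and shapes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))    # a batch of activations
W = rng.standard_normal((16, 32))   # a linear layer's weight matrix

# Column-parallel split: each "device" holds half of W's output columns.
shards = np.split(W, 2, axis=1)

# Each device computes its slice of the output independently...
partials = [x @ w for w in shards]

# ...and the full output is recovered by concatenating along the feature
# axis (in a real multi-device system this is an all-gather collective).
y_parallel = np.concatenate(partials, axis=1)

assert np.allclose(y_parallel, x @ W)   # matches the unsharded computation
```

Automatic tensor parallelism applies this kind of partitioning to a model's layers without the developer writing sharding code by hand.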

The size of the Phi-3 models is well suited to on-device inference and makes lightweight model development, such as fine-tuning or customization, possible on AI PCs and edge devices. Intel client hardware is accelerated through comprehensive software frameworks and tools, including PyTorch and Intel® Extension for PyTorch for local research and development, and the OpenVINO™ Toolkit for model deployment and inference.
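Deployment toolkits commonly shrink a model for on-device inference by quantizing its weights to 8-bit integers. The following numpy sketch illustrates symmetric per-tensor int8 weight quantization in general terms; it is not OpenVINO's implementation, and the matrix size is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)  # float32 weights

# Symmetric per-tensor quantization: map [-max|W|, max|W|] onto the int8 range.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)  # 4x smaller

# At inference time, dequantization recovers an approximation of the weights.
W_dq = W_q.astype(np.float32) * scale

err = np.abs(W - W_dq).max()
assert err <= scale / 2 + 1e-6  # rounding error is bounded by half a step
```

Storing int8 instead of float32 cuts weight memory by 4x, which is a large part of what makes local inference on lower-compute hardware practical.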

Intel is committed to meeting the generative AI needs of its enterprise customers and will continue to support and optimize software for Phi-3 and other leading state-of-the-art language models.

Disclaimer: The information contained in each press release posted on this site was factually accurate on the date it was issued. While these press releases and other materials remain on the Company's website, the Company assumes no duty to update the information to reflect subsequent developments. Consequently, readers of the press releases and other materials should not rely upon the information as current or accurate after their issuance dates.