Computing is spreading across heterogeneous fabrics of CPUs, GPUs, application accelerators, interconnect processors, edge-computing devices and FPGAs – all of which require persistent memory and software to bind them into a complete solution. The race is on to generate, store and analyze data at zettascale. It took more than 12 years to get from petascale to exascale computing; Intel has challenged itself to reach zettascale in five: zetta by 2027. Central to this goal is Intel’s work with the open ecosystem to ensure developers have optimized tools and software environments that accelerate their deployments.

Intel and SiPearl are collaborating to provide a joint platform for the first European exascale supercomputers. SiPearl is designing the microprocessor used in European exascale supercomputers and has selected Intel’s Ponte Vecchio graphics processing unit (GPU) as the high performance computing (HPC) accelerator within the system’s HPC node. The partnership will give European customers the option of combining SiPearl’s high-performance, low-power central processing unit (“Rhea”) with Intel’s family of general-purpose GPUs to build a high-performance compute node, fostering European exascale deployments.
To enable this powerful combination, SiPearl plans to port and optimize oneAPI for the Rhea processor. As an open, standards-based programming model, oneAPI increases developer productivity and workload performance by providing a single programming solution across the heterogeneous compute node. The paired solution will also underline the value of Compute Express Link (CXL) standardization in connecting compute elements, providing lower-latency, coherent connectivity compared with standard PCIe connections.
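
To make the "single programming solution" idea concrete, below is a minimal sketch of what oneAPI code targeting such a node could look like, written in SYCL/DPC++ (the C++-based language at the core of oneAPI). It assumes a SYCL 2020-conformant compiler; the device that is selected at run time depends on what the installed backends expose, and no Rhea- or Ponte Vecchio-specific tuning is modeled here.

    // Minimal oneAPI/SYCL sketch: the same kernel source can run on a
    // CPU or a GPU accelerator, whichever the runtime selects.
    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // default_selector_v picks the preferred available device
        // (typically a GPU if one is visible, otherwise the CPU).
        sycl::queue q{sycl::default_selector_v};
        std::cout << "Running on: "
                  << q.get_device().get_info<sycl::info::device::name>()
                  << "\n";

        constexpr size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        {
            // Buffers hand the data to the runtime for the scope below.
            sycl::buffer bufA{a}, bufB{b}, bufC{c};
            q.submit([&](sycl::handler& h) {
                sycl::accessor A{bufA, h, sycl::read_only};
                sycl::accessor B{bufB, h, sycl::read_only};
                sycl::accessor C{bufC, h, sycl::write_only, sycl::no_init};
                // Simple element-wise addition; the kernel is device-agnostic.
                h.parallel_for(sycl::range<1>{n},
                               [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
            });
        } // Buffer destruction copies results back to the host vectors.

        std::cout << "c[0] = " << c[0] << "\n"; // expect 3
        return 0;
    }

The point of the sketch is portability: the kernel body never names a vendor or architecture, so retargeting it to a new CPU such as Rhea is largely a matter of the runtime and backend support that SiPearl plans to port and optimize.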

Prepping Developer Ecosystem for Next-Gen Intel Xeon Scalable Processors
Intel is working with the open-source community and its large pool of ecosystem partners to simplify the process for developers to build on next-generation Intel® Xeon® Scalable processors (code-named “Sapphire Rapids”) and harness several of the new acceleration engines built into the processor. Next-gen Xeon processors are designed to tackle overhead in data-center-scale deployment models while enabling greater processor core utilization and reducing power and area costs. A new Intel® Accelerator Interfacing Architecture (AIA) instruction set built into the processor helps support efficient dispatch, synchronization and signaling to discrete accelerators. Developers will also have access to several of the performance-enhancing accelerator engines within next-gen Xeon processors, including Intel® Advanced Matrix Extensions (AMX), Intel® QuickAssist Technology and Intel® Dynamic Load Balancer.
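
In practice, software that wants to use an engine such as AMX first checks whether the hardware reports it before dispatching matrix work to the tile units. The sketch below shows one way to do that check from C++ using the CPUID extended-feature leaf; the bit positions follow Intel's published CPUID documentation, the has_amx helper name is illustrative, and operating-system steps such as requesting tile-state permission on Linux are not shown.

    // Hedged sketch: detect AMX feature bits via CPUID (leaf 7, subleaf 0, EDX)
    // before choosing an AMX code path. GCC/Clang on x86 provide <cpuid.h>.
    #include <cpuid.h>
    #include <cstdio>

    bool has_amx() {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
        // Query the extended feature flags; returns 0 if the leaf is unsupported.
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return false;
        const bool amx_bf16 = edx & (1u << 22);  // AMX-BF16
        const bool amx_tile = edx & (1u << 24);  // AMX-TILE
        const bool amx_int8 = edx & (1u << 25);  // AMX-INT8
        return amx_tile && (amx_int8 || amx_bf16);
    }

    int main() {
        std::printf("AMX available: %s\n", has_amx() ? "yes" : "no");
        return 0;
    }

Runtime detection of this kind is what lets libraries and frameworks fall back to standard AVX-512 paths on older Xeon processors while transparently using the new accelerator engines on Sapphire Rapids.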
About Intel
Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel’s innovations, go to newsroom.intel.com and intel.com.
© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.