
Nov 15, 2016 01:20 PM EST

Radeon Open Compute Platform 1.3 Has Been Launched [VIDEO]


AMD has announced the launch of the Radeon Open Compute Platform, or ROCm, version 1.3. The Heterogeneous Compute Compiler (HCC) and the Heterogeneous Compute Interface for Portability (HIP) have made significant progress.

SC16 has begun with AMD

According to the official Radeon Open Compute website, SC, the annual ACM/IEEE-sponsored supercomputing conference in the United States, has begun this week. AMD has started to build an ecosystem that can compete with, and even interoperate with, Nvidia's CUDA technology, aiming to close the software gap between the two companies.

AMD unleashes ROCm 1.3

At SC16, AMD is both updating participants on the current state of the Boltzmann Initiative and providing the latest software release for the project, now known as the Radeon Open Compute Platform, or ROCm for short. The company announced that ROCm 1.3 has been launched, bringing the project closer to completing the Boltzmann Initiative.

AMD released the initial 1.0 version of the platform in April. That first version covered only a small part of the Boltzmann Initiative's complete scope, so the earlier ROCm releases were not production-ready and were, at best, beta-quality software.

HCC and HIP make significant progress

One year after the first introduction of ROCm, AMD is proud to say that the Heterogeneous Compute Compiler and the Heterogeneous Compute Interface for Portability have made significant progress. With version 1.3, ROCm now ships a production version of its native compiler, which is based on LLVM. The native compiler is the most important part of the Boltzmann plan, since it is the key to making HPC software work on AMD's platform.

Defining HCC and HIP

According to another post on the official Radeon Open Compute website, HCC is a C++ dialect with extensions for launching kernels and managing accelerator memory. HIP is another C++ dialect, designed to make it easier to convert CUDA applications into portable C++ code. HIP can also be used for new projects that need portability between AMD and Nvidia hardware.
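To illustrate, here is a minimal sketch of what HIP code looks like. It deliberately mirrors the CUDA runtime API (hipMalloc for cudaMalloc, hipMemcpy for cudaMemcpy, and so on), which is what makes mechanical conversion of CUDA applications feasible; the vector-add kernel and all names below are our own illustration, not taken from AMD's announcement, and compiling it requires AMD's hipcc toolchain and a supported GPU:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

// Kernel syntax is the same as CUDA's: __global__, blockIdx, threadIdx.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // HIP calls correspond one-to-one with the CUDA runtime API.
    float *da, *db, *dc;
    hipMalloc(&da, bytes);
    hipMalloc(&db, bytes);
    hipMalloc(&dc, bytes);
    hipMemcpy(da, ha, bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb, bytes, hipMemcpyHostToDevice);

    // Portable launch macro: grid of n/256 blocks, 256 threads each.
    hipLaunchKernelGGL(vector_add, dim3(n / 256), dim3(256), 0, 0,
                       da, db, dc, n);

    hipMemcpy(hc, dc, bytes, hipMemcpyDeviceToHost);
    printf("c[10] = %f\n", hc[10]);  // a[10] + b[10]

    hipFree(da); hipFree(db); hipFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Because the HIP names map so directly onto their CUDA counterparts, AMD's "hipify" approach can convert existing CUDA sources largely by renaming, while the same source still compiles for Nvidia GPUs through the CUDA toolchain.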

Check out AMD's Breakthrough Performance of Next Generation Zen video below:


© 2017 University Herald, All rights reserved. Do not reproduce without permission.
