November 23, 2024

Nvidia bans use of translation layers for CUDA software to run on other chipsets – new restrictions appear to target some Chinese GPU makers and ZLUDA

Nvidia has banned the use of translation layers to run CUDA-based software on other hardware platforms in its updated licensing terms. The move appears designed to stop the ZLUDA initiative and, perhaps more importantly, some Chinese GPU makers from running CUDA code through translation layers. We have reached out to Nvidia for comment and will update this story with additional details or clarification when we hear back.

Longhorn, a software engineer, noticed the updated terms. “You may not reverse engineer, decompile, or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to target a non-Nvidia platform,” reads a new provision in CUDA 11.5.

Being a leader has a good side and a bad side. On the one hand, everyone depends on you; on the other hand, everyone wants to stand on your shoulders. The latter appears to be what happened with CUDA. Because the combination of CUDA and Nvidia hardware has proven incredibly effective, a lot of software relies on it. However, as more competitive hardware enters the market, more users are tempted to run their CUDA programs on competing platforms. There are two ways to do this: recompile the code (an option available to the developers of the software in question) or use a translation layer.
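For context, below is a minimal, self-contained CUDA program of the sort at stake here (an illustrative vector-add kernel, not taken from any real project). The two routes described above amount to either recompiling this source for another platform or running the already-built binary through a translation layer.

#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: add two vectors element by element.
__global__ void addVectors(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Managed (unified) memory keeps the example short.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    addVectors<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Compiled with nvcc, this produces a binary tied to Nvidia's runtime and driver libraries; that dependency is exactly what the recompile-versus-translate choice is about.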

For obvious reasons, using a translation layer like ZLUDA is the easiest way to run CUDA software on non-Nvidia hardware: one simply takes the already compiled binaries and runs them through ZLUDA or another translation layer. ZLUDA appears to be floundering at the moment, with both AMD and Intel having passed on the opportunity to develop it further, but that doesn't mean the approach isn't viable.
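To illustrate how such a layer works, the sketch below (conceptual only, not ZLUDA source) shows the CUDA driver-API calls a precompiled binary makes through libcuda.so on Linux or nvcuda.dll on Windows; a translation layer ships its own implementation of that library, so the unmodified binary ends up driving non-Nvidia hardware.

// Conceptual sketch, not ZLUDA code: the CUDA driver-API surface that a
// precompiled application calls through libcuda.so / nvcuda.dll. A translation
// layer provides its own implementation of this library, so the same binary
// ends up talking to non-Nvidia hardware.
#include <cstdio>
#include <cuda.h>

int main() {
    // Each call below is resolved from the CUDA driver library at run time;
    // swap that library out and the calls land in the translation layer.
    cuInit(0);

    CUdevice dev;
    cuDeviceGet(&dev, 0);

    char name[256];
    cuDeviceGetName(name, (int)sizeof(name), dev);
    printf("Device 0: %s\n", name);

    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);
    // ... a real application would go on to cuModuleLoad() its kernels and
    // dispatch them with cuLaunchKernel() ...
    cuCtxDestroy(ctx);
    return 0;
}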

Several Chinese GPU makers, including a Chinese government-funded company, claim to run CUDA code. Denglin Technology designs processors featuring a “computing architecture compatible with programming models such as CUDA/OpenCL.” Since Nvidia's GPUs are difficult to reverse engineer (unless one somehow has all the low-level details of Nvidia's GPU architectures), we are most likely dealing with some sort of translation layer here as well.

Moore Threads, one of the largest Chinese GPU makers, offers a MUSIFY compiler designed to let CUDA code work on its GPUs. However, it remains to be seen whether MUSIFY qualifies as a full translation layer (some parts of MUSIFY may involve porting code at the source level). As such, it isn't entirely clear whether Nvidia's ban on translation layers is a direct response to these initiatives or a preemptive strike against future developments.

For obvious reasons, the use of translation layers threatens Nvidia's dominance in accelerated computing, especially in AI applications. This is probably the motivation behind Nvidia's decision to ban software built with its CUDA toolkit from running on other hardware platforms through translation layers, starting with CUDA 11.5.

This clause was absent from CUDA 11.4, so running applications compiled with CUDA 11.4 and earlier on non-Nvidia processors via translation layers still appears to be permitted. In the short term, then, Nvidia will not achieve its goal of stopping everyone from running software developed for its devices on other hardware platforms through layers like ZLUDA. In the long term, however, the clause erects legal barriers to running CUDA software via translation layers on third-party hardware, which could benefit Nvidia and hurt AMD, Intel, Biren, and other developers of AI computing hardware.

Recompiling existing CUDA programs remains completely legal. To make this easier, both AMD and Intel offer tools for porting CUDA programs to their ROCm and oneAPI platforms, respectively.
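As a rough illustration of what that source-level port looks like on AMD's side, here is the vector-add example from earlier after a HIPIFY-style conversion: the CUDA runtime calls are renamed to their HIP equivalents while the kernel and launch syntax stay the same, and the result is compiled natively with hipcc. This is a hand-written approximation, not actual tool output.

#include <cstdio>
#include <hip/hip_runtime.h>   // was <cuda_runtime.h>

// Kernel body is unchanged; HIP keeps the same __global__/threadIdx model.
__global__ void addVectors(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    hipMallocManaged(&a, bytes);   // was cudaMallocManaged
    hipMallocManaged(&b, bytes);
    hipMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    addVectors<<<(n + 255) / 256, 256>>>(a, b, c, n);  // launch syntax unchanged
    hipDeviceSynchronize();        // was cudaDeviceSynchronize

    printf("c[0] = %f\n", c[0]);
    hipFree(a); hipFree(b); hipFree(c);
    return 0;
}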

As AMD, Intel, Tenstorrent, and other companies develop better hardware, more software developers will be inclined to target these platforms, and Nvidia's CUDA dominance may erode over time. Furthermore, software developed and compiled specifically for a given processor will inevitably outperform software run through a translation layer, which means a stronger competitive position for AMD, Intel, Tenstorrent, and others against Nvidia – if they can get software developers on board. GPGPU remains an important and highly competitive arena, and we will be watching how the situation develops.