
Apex GPU: Run CUDA Apps on AMD GPUs Without Recompilation

AMDAnon

Despite being vibecoded, swapping out CUDA for another shared library is technically sound.

Probably violates EULAs, though, which is why AMD has HIP.

throwaway2027

"Wow i make a bridge that allows CUDA on AMD. What have you EVER done in your pathetic life? Oh you gave your sisters herpes, Thats sad." - ArchitectAI

He deleted this comment, a reply to bigyabai, after getting flagged.

throwaway2027

Holy AI Slop

ArchitectAI

I built a lightweight (93KB) CUDA→AMD translation layer using LD_PRELOAD.

It intercepts CUDA API calls at runtime and translates them to HIP/rocBLAS/MIOpen.

No source code needed. No recompilation. Just:

  LD_PRELOAD=./libapex_hip_bridge.so ./your_cuda_app
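
For a sense of what the interception layer is doing, here is a minimal sketch of the technique (illustrative, not the actual bridge source; the file name and build line are assumptions): export functions under the CUDA Runtime's symbol names and forward them to their HIP equivalents. For the simple cases this is nearly mechanical, because HIP keeps the numeric values of things like hipSuccess and the memcpy direction enum aligned with CUDA's.

  /* shim.c -- illustrative sketch, not the Apex bridge itself.
   * Assumed build line:
   *   gcc -shared -fPIC -D__HIP_PLATFORM_AMD__ -I/opt/rocm/include \
   *       shim.c -L/opt/rocm/lib -lamdhip64 -o libshim.so
   */
  #include <hip/hip_runtime_api.h>
  #include <stddef.h>

  typedef int cudaError_t;     /* cudaSuccess and hipSuccess are both 0 */
  typedef int cudaMemcpyKind;  /* direction values match hipMemcpyKind */

  /* The application believes it is calling libcudart; with LD_PRELOAD the
   * dynamic linker resolves these names to the shim first. */
  cudaError_t cudaMalloc(void **ptr, size_t size) {
      return (cudaError_t)hipMalloc(ptr, size);
  }

  cudaError_t cudaFree(void *ptr) {
      return (cudaError_t)hipFree(ptr);
  }

  cudaError_t cudaMemcpy(void *dst, const void *src, size_t n, cudaMemcpyKind kind) {
      return (cudaError_t)hipMemcpy(dst, src, n, (hipMemcpyKind)kind);
  }

  cudaError_t cudaDeviceSynchronize(void) {
      return (cudaError_t)hipDeviceSynchronize();
  }

One caveat with the LD_PRELOAD route: if the application is dynamically linked against libcudart, the loader still wants a libcudart.so.* on disk to satisfy that dependency, so a bridge either ships a stand-in with that soname or relies on the app loading CUDA via dlopen.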

Currently supports:

- 38 CUDA Runtime functions

- 15+ cuBLAS operations (matrix multiply, etc.; see the sketch after this list)

- 8+ cuDNN operations (convolutions, pooling, batch norm)

- PyTorch training and inference
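
To make the cuBLAS line above concrete, here is a hedged sketch of forwarding a single call through hipBLAS (ROCm's cuBLAS-shaped wrapper over rocBLAS); the real bridge may go straight to rocBLAS instead. Two details matter: applications actually call the _v2 symbol names that libcublas exports, and the transpose enums differ numerically, so they have to be remapped rather than cast. The header path and the type shims below are assumptions.

  /* blas_shim.c -- illustrative only. */
  #include <hipblas/hipblas.h>     /* header path varies across ROCm versions */

  typedef void *cublasHandle_t;    /* opaque to the application */
  typedef int   cublasStatus_t;    /* CUBLAS_STATUS_SUCCESS == HIPBLAS_STATUS_SUCCESS == 0 */
  typedef int   cublasOperation_t; /* CUBLAS_OP_N=0, CUBLAS_OP_T=1, CUBLAS_OP_C=2 */

  static hipblasOperation_t map_op(cublasOperation_t op) {
      switch (op) {
          case 1:  return HIPBLAS_OP_T;
          case 2:  return HIPBLAS_OP_C;
          default: return HIPBLAS_OP_N;
      }
  }

  /* Handles handed back here are really hipblasHandle_t values, so every
   * later cuBLAS call can pass them through unchanged. */
  cublasStatus_t cublasCreate_v2(cublasHandle_t *handle) {
      return (cublasStatus_t)hipblasCreate((hipblasHandle_t *)handle);
  }

  cublasStatus_t cublasSgemm_v2(cublasHandle_t handle,
                                cublasOperation_t transa, cublasOperation_t transb,
                                int m, int n, int k,
                                const float *alpha, const float *A, int lda,
                                const float *B, int ldb,
                                const float *beta, float *C, int ldc) {
      return (cublasStatus_t)hipblasSgemm((hipblasHandle_t)handle,
                                          map_op(transa), map_op(transb),
                                          m, n, k, alpha, A, lda,
                                          B, ldb, beta, C, ldc);
  }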

Built in ~10 hours using dlopen/dlsym for dynamic loading. 100% test pass rate.
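
The dlopen/dlsym piece presumably looks something like this (again a sketch under assumptions, not the bridge's code): resolve HIP entry points lazily from libamdhip64.so, so the shim links only against libdl and can report a clear error when no ROCm runtime is present.

  /* Lazy resolution of a HIP symbol via dlopen/dlsym -- illustrative. */
  #include <dlfcn.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <stdlib.h>

  typedef int (*hipMalloc_fn)(void **ptr, size_t size);  /* hipError_t is int-sized */

  static void        *hip_lib;
  static hipMalloc_fn p_hipMalloc;

  static void ensure_hip(void) {
      if (hip_lib) return;
      /* soname/version may differ per install, e.g. libamdhip64.so.6 */
      hip_lib = dlopen("libamdhip64.so", RTLD_NOW | RTLD_GLOBAL);
      if (!hip_lib) {
          fprintf(stderr, "apex bridge: %s\n", dlerror());
          abort();
      }
      p_hipMalloc = (hipMalloc_fn)dlsym(hip_lib, "hipMalloc");
  }

  /* Exported under the CUDA name; the body forwards to the resolved HIP symbol. */
  int cudaMalloc(void **ptr, size_t size) {
      ensure_hip();
      return p_hipMalloc(ptr, size);
  }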

The goal: break NVIDIA's CUDA vendor lock-in and make AMD GPUs viable for existing CUDA workloads without months of porting effort.

bigyabai

> ## First Comment (Expand on technical details)

> Post this as your first comment after submitting:

lmfao


ArchitectAI

[flagged]