Loading a torchscript model in C++: Cannot initialize CUDA without ATen_cuda library

ChatGPT:

If you encounter the error "Cannot initialize CUDA without ATen_cuda library" when trying to load a TorchScript model in C++ and you are not using CUDA, it usually means that something, either the saved model's tensors or your own code, is requesting a CUDA device while the libtorch build you linked against was compiled without CUDA support, so the ATen CUDA backend is simply not there. Here are a few steps you can try to resolve the issue:

  1. Ensure that you are building your C++ project without CUDA support. Check your build configuration and make sure that CUDA-related flags and libraries are not included.

  2. Make sure that you have properly installed PyTorch and its C++ distribution (libtorch). If you do not have a compatible GPU, use the CPU-only libtorch build, and load the model explicitly onto the CPU so that nothing tries to initialize CUDA (see the loading sketch after this list).

  3. Verify that you are linking against the ATen/libtorch libraries that ship with your PyTorch installation, and that you are not mixing headers and libraries from different PyTorch versions or from CPU and CUDA builds.

  4. If you are using CMake, double-check your CMakeLists.txt file and make sure find_package(Torch) can locate your libtorch installation, typically by passing -DCMAKE_PREFIX_PATH=/path/to/libtorch when configuring. find_package then defines TORCH_LIBRARIES, TORCH_INCLUDE_DIRS, and TORCH_CXX_FLAGS for you; link your target against TORCH_LIBRARIES rather than hard-coding library paths (see the CMakeLists.txt sketch after this list).

  5. If you are using a custom build or installation of PyTorch, ensure that it was built correctly and that all necessary dependencies are properly installed.
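
As mentioned in step 2, a minimal sketch of loading a TorchScript model pinned to the CPU with the libtorch C++ API might look like the following (the file name, program layout, and error handling are placeholders, not part of your project):

```cpp
#include <torch/script.h>  // One-stop header for the TorchScript C++ API.
#include <torch/cuda.h>    // For torch::cuda::is_available().

#include <iostream>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: load_model <path-to-exported-script-module>\n";
    return 1;
  }

  // For a CPU-only libtorch build this prints 0, which is expected and fine.
  std::cout << "CUDA available: " << torch::cuda::is_available() << "\n";

  torch::jit::script::Module module;
  try {
    // Map the model onto the CPU at load time, even if it was traced or
    // scripted on a GPU machine, so a CPU-only build never touches CUDA.
    module = torch::jit::load(argv[1], torch::kCPU);
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model: " << e.what() << "\n";
    return 1;
  }

  std::cout << "model loaded successfully\n";
  return 0;
}
```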

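A matching CMakeLists.txt sketch for step 4, assuming the source file above is load_model.cpp and libtorch has been unpacked somewhere on disk, could look like this:

```cmake
cmake_minimum_required(VERSION 3.18)
project(load_model)

# Point CMake at the unpacked CPU-only libtorch, e.g.
#   cmake -DCMAKE_PREFIX_PATH=/absolute/path/to/libtorch ..
find_package(Torch REQUIRED)  # defines TORCH_LIBRARIES, TORCH_INCLUDE_DIRS, TORCH_CXX_FLAGS

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

add_executable(load_model load_model.cpp)
target_link_libraries(load_model "${TORCH_LIBRARIES}")
set_property(TARGET load_model PROPERTY CXX_STANDARD 17)
```
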
If the issue persists, it may be helpful to provide more details about your project setup, including your C++ code and build configuration, as well as the version of PyTorch you are using.