Vina-GPU

A heterogeneous OpenCL implementation of AutoDock Vina

Compiling and Running

Note: at least one GPU is required, and make sure your GPU driver is up to date

Windows

Run the executable file

  1. The first time you use Vina-GPU, run Vina-GPU-K.exe with the command ./Vina-GPU-K.exe --config=./input_file_example/2bm2_config.txt. You should obtain the docking result 2bm2_out.pdbqt for the example complex, along with a Kernel2_Opt.bin file
  2. Once you have the Kernel2_Opt.bin file, you can run Vina-GPU.exe without compiling the kernel files (which saves runtime)

When you run Vina-GPU.exe, please make sure the Kernel2_Opt.bin file is in the same directory

For the usage and limitations of Vina-GPU, please check Usage and Limitation. A graphical user interface (GUI) is also provided for Windows users; please check Graphic User Interface (GUI)

Build from source file

Visual Studio 2019 is recommended for building Vina-GPU from source

  1. install the Boost library (the version used here is 1.77.0)

  2. install the CUDA Toolkit (the version used here is v11.5) if you are using NVIDIA GPU cards

    Note: the OpenCL library can be found in the CUDA installation path for NVIDIA cards, or in the driver installation path for AMD cards

  3. add ./lib, ./OpenCL/inc, $(YOUR_BOOST_LIBRARY_PATH)/boost and $(YOUR_CUDA_TOOLKIT_LIBRARY_PATH)/CUDA/v11.5/include to the include directories

  4. add $(YOUR_BOOST_LIBRARY_PATH)/stage/lib and $(YOUR_CUDA_TOOLKIT_PATH)/CUDA/lib/Win32 to the additional library directories

  5. add OpenCL.lib to the additional dependencies

  6. add --config=./input_file_example/2bm2_config.txt to the command arguments

  7. add WIN32 to the preprocessor definitions if necessary

  8. if you want to compile the binary kernel file on the fly, add BUILD_KERNEL_FROM_SOURCE to the preprocessor definitions

  9. build & run

Note: ensure the line endings are CRLF
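The line-ending requirement can be checked from a shell such as Git Bash. This is a minimal sketch: it creates a throwaway file with CRLF endings and shows how carriage returns are detected (sample_kernel.cl is an illustrative name, not a file from the repository):

```shell
# Create a throwaway file with CRLF (Windows) line endings,
# then detect the carriage returns with grep.
printf 'kernel void k() {}\r\n' > sample_kernel.cl

if grep -q $'\r' sample_kernel.cl; then
    echo "CRLF endings present"
else
    echo "LF-only endings - convert before building on Windows"
fi
```

Running the same grep against the real kernel sources before building tells you whether a conversion is needed.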

Linux

  1. install the Boost library (the version used here is 1.77.0)

  2. install the CUDA Toolkit (the version used here is 11.5) if you are using NVIDIA GPU cards

    note: the OpenCL library can usually be found in /usr/local/cuda (for NVIDIA GPU cards)

  3. change BOOST_LIB_PATH and OPENCL_LIB_PATH accordingly in the Makefile

  4. set the GPU platform GPU_PLATFORM and the OpenCL version OPENCL_VERSION in the Makefile; the options are given below:

    note: -DOPENCL_3_0 is highly recommended on Linux

| Macro | Options | Description |
| --- | --- | --- |
| GPU_PLATFORM | -DNVIDIA_PLATFORM / -DAMD_PLATFORM | NVIDIA / AMD GPU platform |
| OPENCL_VERSION | -DOPENCL_3_0 / -DOPENCL_1_2 | OpenCL version 3.0 / 1.2 |
  5. type make clean and make source to build Vina-GPU so that it compiles the kernel files on the fly (this takes some time on first use)

  6. after a successful build, the Vina-GPU executable can be found in the directory

  7. type ./Vina-GPU --config ./input_file_example/2bm2_config.txt to run Vina-GPU

  8. once you have successfully run Vina-GPU, its runtime can be further reduced by typing make clean and make to build it without compiling the kernel files (but make sure the Kernel2_Opt.bin file is unchanged)

  9. other compile options:

| Option | Description |
| --- | --- |
| -g | debug |
| -DDISPLAY_ADDITION_INFO | print additional information |
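Putting steps 3 and 4 together, the top of the Makefile ends up looking something like the fragment below (a sketch for an NVIDIA card; the Boost path is a placeholder, and the variable names are the ones the steps above refer to):

```make
# step 3: your Boost root and the directory holding the OpenCL library
BOOST_LIB_PATH=/path/to/boost_1_77_0
OPENCL_LIB_PATH=/usr/local/cuda
# step 4: GPU platform and OpenCL version (see the macro table above)
GPU_PLATFORM=-DNVIDIA_PLATFORM
OPENCL_VERSION=-DOPENCL_3_0
```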

macOS

Note: running Vina-GPU on macOS is not recommended and has not been fully tested yet

  1. install the Boost library (the version used here is 1.77.0)

modify the Makefile as follows:

  1. comment out OPENCL_LIB_PATH, OPENCL_INC_PATH and -L$(OPENCL_LIB_PATH)/lib64
  2. add -framework OpenCL to LIB3
  3. type make and run

Usage

| Argument | Description | Default value |
| --- | --- | --- |
| --config | the config file (in .txt format) that contains all the following arguments, for convenience of use | no default |
| --receptor | the receptor file (in .pdbqt format) | no default |
| --ligand | the ligand file (in .pdbqt format) | no default |
| --thread | the scale of parallelism (docking lanes) | 1000 |
| --search_depth | the number of search iterations in each docking lane | heuristically determined |
| --center_x/y/z | the center of the search box in the receptor | no default |
| --size_x/y/z | the volume of the search box | no default |
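A config file is just these arguments written one per line in key = value form. The sketch below generates an illustrative one (the file name, paths, and box values are all made up for the example, not taken from the repository):

```shell
# Write an illustrative Vina-GPU config file. Every value below is a
# placeholder; substitute your own receptor, ligand, and search box.
cat > my_config.txt <<'EOF'
receptor = ./receptor.pdbqt
ligand = ./ligand.pdbqt
center_x = 10.0
center_y = 12.5
center_z = -3.0
size_x = 20
size_y = 20
size_z = 20
thread = 8000
EOF
```

It would then be passed as ./Vina-GPU --config ./my_config.txt (or --config=./my_config.txt on Windows).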

Limitation

| Argument | Description | Limitation |
| --- | --- | --- |
| --thread | the scale of parallelism (docking lanes) | preferably less than 10000 |
| --size_x/y/z | the volume of the search box | less than 30/30/30 |

Graphic User Interface (GUI)

A graphical user interface (GUI) is provided for users on Windows

  1. first, make sure that Vina-GPU.exe can run in a terminal
  2. put the Vina-GPU.exe and Kernel2_Opt.bin files in ./Vina-GPU/GUI/exec and overwrite the original files
  3. run the Vina-GPU-GUI.exe file within ./Vina-GPU/GUI to start up the Vina-GPU GUI
  4. select the input and output files
  5. set the box center, the box size, thread and search_depth
  6. click the start button to run Vina-GPU

Citation

  • Shidi Tang, Ruiqi Chen, Mengru Lin, Qingde Lin, Yanxiang Zhu, Jiansheng Wu, Haifeng Hu, and Ming Ling. "Accelerating AutoDock VINA with GPUs." ChemRxiv (2021). doi: 10.33774/chemrxiv-2021-3qvn2-v2

  • O. Trott, A. J. Olson, AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization and multithreading, Journal of Computational Chemistry 31 (2010) 455-461.

Comments
  • Segmentation Fault (Probably just an OpenCL Issue)

    Alright, I'm trying to install the software on a High Performance Computer, however when running it with the example test file, it gives a segmentation fault. Here is the output.

    /Vina-GPU --config ./input_file_example/2bm2_config.txt
    Reading input ... done.
    Setting up the scoring function ... done.
    Analyzing the binding site ... done.
    Using random seed: 721271727
    
    Platform: NVIDIA CUDA
    Device: Tesla V100S-PCIE-32GB
    
    Device: Tesla V100S-PCIE-32GB
    
    Device: Tesla V100S-PCIE-32GB
    
    Device: Tesla V100S-PCIE-32GB
    
    Build kernels from source
    OpenCL version: 3.0
    Search depth is set to 6
    Segmentation fault (core dumped)
    

    I put it into the debugger and got this as an output.

    Device: Tesla V100S-PCIE-32GB
    [New Thread 0x2aaaaf02b700 (LWP 38126)]
    [New Thread 0x2aaaaf22c700 (LWP 38128)]
    [New Thread 0x2aaaaf42d700 (LWP 38129)]
    [New Thread 0x2aaaaf62e700 (LWP 38130)]
    [New Thread 0x2aaaaf82f700 (LWP 38131)]
    [New Thread 0x2aaaafa30700 (LWP 38132)]
    
    Build kernels from source
    OpenCL version: 3.0
    [New Thread 0x2aaaafc31700 (LWP 38135)]
    Search depth is set to 6
    
    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x2aaaaf02b700 (LWP 38126)]
    0x00002aaaadcc5fa0 in ?? () from /lib64/libnvidia-opencl.so.1
    Missing separate debuginfos, use: debuginfo-install glibc-2.17-325.el7_9.x86_64 nvidia-driver-branch-495-cuda-libs-495.29.05-1.el7.x86_64 ocl-icd-2.2.12-1.el7.x86_64
    (gdb) bt
    #0  0x00002aaaadcc5fa0 in ?? () from /lib64/libnvidia-opencl.so.1
    #1  0x00002aaaadde74c0 in ?? () from /lib64/libnvidia-opencl.so.1
    #2  0x00002aaaadd8974f in ?? () from /lib64/libnvidia-opencl.so.1
    #3  0x00002aaaadc8649c in ?? () from /lib64/libnvidia-opencl.so.1
    #4  0x00002aaaaddfef96 in ?? () from /lib64/libnvidia-opencl.so.1
    #5  0x00002aaaadd83376 in ?? () from /lib64/libnvidia-opencl.so.1
    #6  0x00002aaaabc53ea5 in start_thread () from /lib64/libpthread.so.0
    #7  0x00002aaaac17eb0d in clone () from /lib64/libc.so.6
    

    The computer hardware specs are,

    • 4 NVIDIA Tesla V100's
    • 2x AMD EPYC 7601 processors (32 cores each with hyper-threading)

    We are running this in "CentOS Linux 7 (Core)".

    I will continue to investigate this issue further. However by the looks of things, this looks more like an issue with our Cuda or other libraries we are linking to instead of an issue with Vina-GPU.

    opened by papaSneetch 19
  • Segmentation fault (core dumped)

When I installed the software, I ran into this problem:

    Reading input ... done.
    Setting up the scoring function ... done.
    Analyzing the binding site ... done.
    Using random seed: -181661601
    
    Platform: NVIDIA CUDA
    Platform 0 device name:Tesla V100-SXM2-16GB
    
    Platform 0 global memory size:16.945512 GB
    
    Platform 0 local memory size:49.152000 KB
    
    Search depth is set to 5
    Segmentation fault (core dumped)
    

Have you tested your program on an A100 (Ampere architecture) + Linux? If so, could you tell me the versions of CUDA, GCC, Boost, and the OS?

    stack overflow file not found 
    opened by ZeroDesigner 14
  • Err-30:CL_INVALID_VALUE

    I've recently compiled Vina-GPU however when I run the command: ./Vina-GPU --config ./input_file_example/2bm2_config.txt

    I get the error: `Reading input ... done. Setting up the scoring function ... done. Analyzing the binding site ... done. Using random seed: -108247774

    Platform: NVIDIA CUDA Device: Tesla T4

    Search depth is set to 6 Err-30:CL_INVALID_VALUE`

    My gcc and g++ are both 10.3.0 and cuda is 11.1 (installed via conda)

    Background to how I've compiled it:

    1. I've installed the following packages conda install -c conda-forge rdkit conda install -c hcc adfr-suite

    2. Downloaded and decompressed Boost and Vina-GPU and ran ulimit -s 8192

    3. Changed the Makefile content as: BOOST_LIB_PATH=/content/boost_1_77_0 OPENCL_LIB_PATH=/usr/local/cuda and then cd Vina-GPU-main/ && make clean && make I've got several warning and deprecation messages but no errors and "Vina-GPU" was generated.

    Any ideas about what might be causing the problem?

    opened by naeemmrz 5
  • Error in compilation

    Hi, I am not able to compile the code on Linux system. When I do make source, I get the following error messages:

    /usr/bin/ld: cannot find -lboost_program_options /usr/bin/ld: cannot find -lboost_system /usr/bin/ld: cannot find -lboost_filesystem collect2: error: ld returned 1 exit status make: *** [source] Error 1

    Because I do not have sudo privileges, I downloaded boost 1.80 and stored it under the root folder of the repository. I have also installed CUDA toolkit (11.7) under upper folder relative to the repository. I have changed Makefile accordingly, so its first several lines read:

    BOOST_LIB_PATH=boost_1_80_0 OPENCL_LIB_PATH=../cuda_11_7 OPENCL_VERSION=-DOPENCL_3_0 GPU_PLATFORM=-DNVIDIA_PLATFORM

    Using the same settings I actually can compile the code in an Ubuntu 20.04.2LTS machine which I have root privileges. But I cannot compile the code on the cluster that has Red Hat 4.8.5-44. I am using GCC version 10.2.

    Any help is greatly appreciated!

    opened by JerryJohnsonLee 4
  • exhaustiveness and threads relations

    Hi! I'm using CPU version of Vina (v 1.2.3) with exhaustiveness=8 and all available CPU on linux (12 cores on AMD 5990x). then I use Vina-GPU, how much "threads" i need to set to get the same (or related) docking accuracy? how exhaustiveness *cpus is related to threads?

    opened by igor611 3
  • Segmentation Fault

    (base) [dalg@localhost Vina-GPU-main]$ ./Vina-GPU --config ./input_file_example/2bm2_config.txt

    #################################################################
    # If you used Vina-GPU in your work, please cite:               #
    # Shidi, Tang, Chen Ruiqi, Lin Mengru, Lin Qingde,              #
    # Zhu Yanxiang, Wu Jiansheng, Hu Haifeng, and Ling Ming.        #
    # Accelerating AutoDock VINA with GPUs. ChemRxiv (2021). Print. #
    # And also the origin AutoDock Vina paper:                      #
    # O. Trott, A. J. Olson,                                        #
    # AutoDock Vina: improving the speed and accuracy of docking    #
    # with a new scoring function, efficient optimization and       #
    # multithreading, Journal of Computational Chemistry 31 (2010)  #
    # 455-461                                                       #
    # DOI 10.1002/jcc.21334                                         #
    #################################################################

    Reading input ... done. Setting up the scoring function ... done. Analyzing the binding site ... done. Using random seed: 1167282562 Segmentation fault (core dumped)

    OS centos 8 stream CUDA 11.4 GPU Tesla V100-PCIE gcc 8.5.0

    opened by greatlse 3
  • Segmentation Fault on Input File Example

    When I ran this command, ./Vina-GPU --config ./input_file_example/2bm2_config.txt

    I got this output,

    Reading input ... done.
    Setting up the scoring function ... done.
    Analyzing the binding site ... done.
    Using random seed: -1907456286
    
    Platform: NVIDIA CUDA
    Search depth is set to 5
    Segmentation fault (core dumped)

    I am using Ubuntu 20.04 and made sure the Boost and CUDA toolkit versions were exactly the same as stated in the readme.

    I put the same command in the debugger and got this for an output.

    Using random seed: -705894464
    [New Thread 0x7ffff5bf6700 (LWP 2570)]
    
    Platform: NVIDIA CUDA[New Thread 0x7fffde835700 (LWP 2571)]
    [New Thread 0x7fffde034700 (LWP 2572)]
    [New Thread 0x7fffdd833700 (LWP 2573)]
    [New Thread 0x7fffdd032700 (LWP 2574)]
    [New Thread 0x7fffdc831700 (LWP 2575)]
    [New Thread 0x7fffd7fff700 (LWP 2576)]
    [New Thread 0x7fffd77fe700 (LWP 2577)]
    
    [New Thread 0x7fffd6ffd700 (LWP 2578)]
    Search depth is set to 5
    
    Thread 2 "Vina-GPU" received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7ffff5bf6700 (LWP 2570)]
    0x00007ffff78444a7 in __GI_fseek (fp=0x0, offset=0, whence=2) at fseek.c:35
    35	fseek.c: No such file or directory.
    (gdb) bt
    #0  0x00007ffff78444a7 in __GI_fseek (fp=0x0, offset=0, whence=2) at fseek.c:35
    #1  0x00005555555f3116 in SetupBuildProgramWithBinary(_cl_context*, _cl_device_id**, char const*) ()
    #2  0x00005555555b765a in monte_carlo::operator()(model&, boost::ptr_vector<output_type, boost::heap_clone_allocator, void>&, precalculate const&, igrid const&, precalculate const&, igrid const&, vec const&, vec const&, incrementable*, boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>&) const ()
    #3  0x00005555555c5e77 in boost::detail::thread_data<boost::reference_wrapper<parallel_for<parallel_iter<parallel_mc_aux, boost::ptr_vector<parallel_mc_task, boost::heap_clone_allocator, void>, parallel_mc_task, true>::aux, true>::aux> >::run() ()
    #4  0x00005555555f49c2 in thread_proxy ()
    #5  0x00007ffff79ce609 in start_thread (arg=<optimized out>) at pthread_create.c:477
    #6  0x00007ffff78d8293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
    (gdb) f 6
    #6  0x00007ffff78d8293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
    95	../sysdeps/unix/sysv/linux/x86_64/clone.S: No such file or directory.
    
    opened by papaSneetch 3
  • Build kernels from sourceError: failed to open file

    Dears

    I am getting the following error:


    Platform: NVIDIA CUDA
    Build kernels from source
    Error: failed to open file : ./OpenCL/src/kernels/code_head.cpp
    Error: failed to open file : ./OpenCL/src/kernels/mutate_conf.cpp
    Error: failed to open file : ./OpenCL/src/kernels/matrix.cpp
    Error: failed to open file : ./OpenCL/src/kernels/quasi_newton.cpp
    Error: failed to open file
    Segmentation fault (core dumped)


    My installation output is this:


    gcc -o Vina-GPU -I/home/zapata/boost_1_77_0 -I/home/zapata/boost_1_77_0/boost -I./lib -I./OpenCL/inc -I/usr/local/cuda-11.5/include ./main/main.cpp -O3 ./lib/.cpp ./OpenCL/src/wrapcl.cpp /home/zapata/boost_1_77_0/libs/thread/src/pthread/thread.cpp /home/zapata/boost_1_77_0/libs/thread/src/pthread/once.cpp -lboost_program_options -lboost_system -lboost_filesystem -lOpenCL -lstdc++ -lm -lpthread -L/home/zapata/boost_1_77_0/stage/lib -L/usr/local/cuda-11.5/lib64 -DOPENCL_3_0 -DNVIDIA_PLATFORM -DBUILD_KERNEL_FROM_SOURCE In file included from ./lib/monte_carlo.cpp:34: /home/zapata/boost_1_77_0/boost/progress.hpp:23:108: note: #pragma message: This header is deprecated. Use the facilities in <boost/timer/timer.hpp> or <boost/timer/progress_display.hpp> instead. BOOST_HEADER_DEPRECATED( "the facilities in <boost/timer/timer.hpp> or <boost/timer/progress_display.hpp>" ) ^ In file included from /home/zapata/boost_1_77_0/boost/progress.hpp:25, from ./lib/monte_carlo.cpp:34: /home/zapata/boost_1_77_0/boost/timer.hpp:21:70: note: #pragma message: This header is deprecated. Use the facilities in <boost/timer/timer.hpp> instead. 
BOOST_HEADER_DEPRECATED( "the facilities in <boost/timer/timer.hpp>" ) ^ ./lib/monte_carlo.cpp: In member function ‘void monte_carlo::operator()(model&, output_container&, const precalculate&, const igrid&, const precalculate&, const igrid&, const vec&, const vec&, incrementable, rng&) const’: ./lib/monte_carlo.cpp:481:66: warning: narrowing conversion of ‘(fl)((const monte_carlo*)this)->monte_carlo::hunt_cap.vec::operator’ from ‘fl’ {aka ‘double’} to ‘float’ inside { } [-Wnarrowing] float hunt_cap_float[3] = {hunt_cap[0], hunt_cap[1], hunt_cap[2]}; ^ ./lib/monte_carlo.cpp:481:66: warning: narrowing conversion of ‘(fl)((const monte_carlo*)this)->monte_carlo::hunt_cap.vec::operator’ from ‘fl’ {aka ‘double’} to ‘float’ inside { } [-Wnarrowing] ./lib/monte_carlo.cpp:481:66: warning: narrowing conversion of ‘(fl)((const monte_carlo*)this)->monte_carlo::hunt_cap.vec::operator’ from ‘fl’ {aka ‘double’} to ‘float’ inside { } [-Wnarrowing] ./lib/monte_carlo.cpp:482:79: warning: narrowing conversion of ‘authentic_v.vec::operator’ from ‘fl’ {aka ‘double’} to ‘float’ inside { } [-Wnarrowing] float authentic_v_float[3] = { authentic_v[0],authentic_v[1], authentic_v[2] }; ^ ./lib/monte_carlo.cpp:482:79: warning: narrowing conversion of ‘authentic_v.vec::operator’ from ‘fl’ {aka ‘double’} to ‘float’ inside { } [-Wnarrowing] ./lib/monte_carlo.cpp:482:79: warning: narrowing conversion of ‘authentic_v.vec::operator’ from ‘fl’ {aka ‘double’} to ‘float’ inside { } [-Wnarrowing] In file included from ./lib/parallel_progress.h:26, from ./lib/parallel_mc.cpp:26: /home/zapata/boost_1_77_0/boost/progress.hpp:23:108: note: #pragma message: This header is deprecated. Use the facilities in <boost/timer/timer.hpp> or <boost/timer/progress_display.hpp> instead. 
BOOST_HEADER_DEPRECATED( "the facilities in <boost/timer/timer.hpp> or <boost/timer/progress_display.hpp>" ) ^ In file included from /home/zapata/boost_1_77_0/boost/progress.hpp:25, from ./lib/parallel_progress.h:26, from ./lib/parallel_mc.cpp:26: /home/zapata/boost_1_77_0/boost/timer.hpp:21:70: note: #pragma message: This header is deprecated. Use the facilities in <boost/timer/timer.hpp> instead. BOOST_HEADER_DEPRECATED( "the facilities in <boost/timer/timer.hpp>" ) ^ In file included from ./lib/parallel_progress.h:26, from ./lib/parallel_progress.cpp:25: /home/zapata/boost_1_77_0/boost/progress.hpp:23:108: note: #pragma message: This header is deprecated. Use the facilities in <boost/timer/timer.hpp> or <boost/timer/progress_display.hpp> instead. BOOST_HEADER_DEPRECATED( "the facilities in <boost/timer/timer.hpp> or <boost/timer/progress_display.hpp>" ) ^ In file included from /home/zapata/boost_1_77_0/boost/progress.hpp:25, from ./lib/parallel_progress.h:26, from ./lib/parallel_progress.cpp:25: /home/zapata/boost_1_77_0/boost/timer.hpp:21:70: note: #pragma message: This header is deprecated. Use the facilities in <boost/timer/timer.hpp> instead. BOOST_HEADER_DEPRECATED( "the facilities in <boost/timer/timer.hpp>" ) ^ ./OpenCL/src/wrapcl.cpp: In function ‘_cl_program* SetupBuildProgramWithBinary(cl_context, _cl_device_id**, const char*)’: ./OpenCL/src/wrapcl.cpp:261:10: warning: ignoring return value of ‘size_t fread(void*, size_t, size_t, FILE*)’, declared with attribute warn_unused_result [-Wunused-result] fread(binary_buffer, sizeof(char), program_size, program_handle); ~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


My CUDA version is 11.5, kernel 4.15.0-162-generic, Ubuntu 18.04, built with -DOPENCL_3_0

    I would appreciate your help to figure out how to solve this issue.

    Thanks in advance!

    file not found 
    opened by tavolivos 3
  • Parameters to support docking with macrocycles

    Hi,

    Just wondering if there are any other flags or special config arguments to better support docking compounds with macrocycles? The failure rate for docking macrocycle compounds is pretty high (~14%). Is there anything beyond the default parameters that might help bring this docking failure rate down?

    Thank you.

    opened by JSLJ23 2
  • Quick Clarifying Question about GPUs

    So while testing the program we noticed that it wasn't using all of the GPUs on our system. It looks like Vina-GPU successfully detects all the GPUs, based on this output.

    Using random seed: 1833385468
    
    Platform: NVIDIA CUDA
    Device: Tesla V100S-PCIE-32GB
    
    Device: Tesla V100S-PCIE-32GB
    
    Device: Tesla V100S-PCIE-32GB
    
    Device: Tesla V100S-PCIE-32GB
    
    Search depth is set to 6
    

    However, Vina-GPU is using only one GPU based on some diagnostic tools that I used. It's probably something to do with how things are configured in our system that I'll need to find; however, I just wanted to confirm whether Vina-GPU is designed to fully utilize multiple GPUs in a system or just a single one. I'll assume it's designed to use multiple; however, if it isn't designed with multiple GPUs in mind, that would be nice to know so I don't go on a wild goose chase.

    opened by papaSneetch 2
  • How to run vina-gpu on windows 10

    How do I run Vina-GPU on Windows 10? I tried to use this software, and it felt unusable. This error appeared: Platform: NVIDIA CUDA Could not open file: C:\Users\ADMINI~1\AppData\Local\Temp\dep-c25713.d. How did this error happen? How should I configure my computer to use Vina-GPU on Windows 10? Can you describe these steps in detail? E.g.: add ./lib ./OpenCL/inc $(YOUR_BOOST_LIBRARY_PATH)/boost $(YOUR_CUDA_TOOLKIT_LIBRARY_PATH)/CUDA/v11.5/include in the include directories;

    add $(YOUR_BOOST_LIBRARY_PATH)/stage/lib $(YOUR_CUDA_TOOLKIT_PATH)/CUDA/lib/Win32 in the additional library directories;

    add OpenCL.lib in the additional dependencies.

    good first issue Vina-GPU on Windows 
    opened by bioleo 2
  • Any parameters to reduce memory consumption in config.txt?

    Hi,

    I was wondering if there were any parameters that could be given to reduce the memory consumption of Vina-GPU during docking to avoid the Err-6:CL_OUT_OF_HOST_MEMORY as I have other concurrent processes running on my GPU and can only afford about 4GB for Vina-GPU... Thank you.

    opened by JSLJ23 0
  • fl rmsd_upper_bound(): Assertion `a.size() == b.size()` failed

    While doing large scale virtual screening using Vina-GPU, many molecules fail because of the assertion error:

    CUDA_VISIBLE_DEVICES=1 ./Vina-GPU --receptor protein.pdbqt --ligand 1.pdbqt --num_modes 1 
    --center_x 7.2395000000000005 --center_y -0.6745000000000001 --center_z 31.628500000000003
    --size_x 9.583 --size_y 19.809 --size_z 12.779 --thread 10000 --out res.pdbqt: 
    
    Vina-GPU: ./lib/coords:31 fl rmsd_upper_bound(const vecv&, const vecv&): Assertion `a.size() == b.size()` failed
    

    @Glinttsd Do you have any idea of this? Is there any configuration that can prevent it or does retry work?

    Thanks in advance!

    opened by shazj99 1
  • Assertion `m.num_other_pairs() == 0' failed when using --flex

    Hi,

    I am running autodock-vina tutorial data (https://github.com/ccsb-scripps/AutoDock-Vina/tree/develop/example/flexible_docking) and Vina-GPU gives an error when running with the --flex option.

    Any ideas?

    Vina-GPU: ./lib/monte_carlo.cpp:309: void monte_carlo::operator()(model&, output_container&, const precalculate&, const igrid&, const precalculate&, const igrid&, const vec&, const vec&, incrementable*, rng&) const: Assertion `m.num_other_pairs() == 0' failed.

    opened by rfleiro 2
  • Need instruction about how to install dependencies on Linux system

    Hi, I tried to install all the dependencies, e.g. Boost, OpenCL, CUDA, etc., according to README instructions, but I didn't succeed.

    The hardest part I am facing is the OpenCL lib, which is not in the CUDA folder as instructed after I installed NVidia driver and CUDA library.

    I wonder if you could write in detail about how to install the dependencies on a newly provisioned Linux system (e.g. Ubuntu 20.04 or 22.04) Or could you help to provide a Dockerfile in order for us to build the Docker images?

    opened by kopwei 2
  • Err-6:CL_OUT_OF_HOST_MEMORY

    Dear Glinttsd, I faced this "Err-6:CL_OUT_OF_HOST_MEMORY" issue on your test system and on all of mine.

    Info: Ubuntu 20.04, boost_1_80

    NVIDIA-SMI 510.60.02 Driver Version: 510.60.02 CUDA Version: 11.6 NVIDIA 3080 967MiB / 10240MiB

                   total        used        free      shared  buff/cache   available
    Mem:            31Gi        10Gi       7,9Gi       550Mi        12Gi        19Gi
    Swap:          2,0Gi       439Mi       1,6Gi

    opened by Golovin-Andrey 0
  • error while loading shared libraries: libboost_program_options.so.1.79.0: cannot open shared object file: No such file or directory

    Hi, I have a problem; how can I solve this? ./Vina-GPU --config ./input_file_example/2bm2_config.txt ./Vina-GPU: error while loading shared libraries: libboost_program_options.so.1.79.0: cannot open shared object file: No such file or directory

    opened by purnawanpp 1
Owner
Nanjing University of Posts and Telecommunications