PostgresML is capable of leveraging GPUs when the underlying libraries and hardware are properly configured on the database server. The CUDA runtime is statically linked during the build process, so it does not introduce additional dependencies on the runtime host.
Models trained on a GPU may also require GPU support to make predictions. Consult each library's documentation on configuring training vs. inference.
GPU setup for TensorFlow is covered in the TensorFlow documentation. You may acquire pre-trained GPU-enabled models for fine-tuning from Hugging Face.
GPU setup for Torch is covered in the Torch documentation. You may acquire pre-trained GPU-enabled models for fine-tuning from Hugging Face.
GPU setup for Flax is covered in the Flax documentation. You may acquire pre-trained GPU-enabled models for fine-tuning from Hugging Face.
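As a quick way to confirm the Hugging Face stack works end to end, the sketch below runs a pre-trained model through pgml.transform. The model name is only an example from the Hugging Face Hub, and whether inference actually lands on the GPU depends on the Torch/TensorFlow/Flax setup described above.
-- Example only: any text-classification model from the Hugging Face Hub will do.
SELECT pgml.transform(
    task => '{
        "task": "text-classification",
        "model": "distilbert-base-uncased-finetuned-sst-2-english"
    }'::JSONB,
    inputs => ARRAY[
        'PostgresML brought the model to the data.'
    ]
);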
GPU setup for XGBoost is covered in the XGBoost documentation. Train on the GPU by passing the gpu_hist tree method as a hyperparameter:
SELECT pgml.train(
    'GPU project',
    algorithm => 'xgboost',
    -- gpu_hist is XGBoost's GPU-accelerated histogram tree method
    hyperparams => '{"tree_method": "gpu_hist"}'
);
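Once a project has been trained this way, predictions are served with pgml.predict. Per the note above, a model trained on a GPU may need the GPU available at inference time as well; the feature values here are placeholders for whatever columns the project was trained on.
-- Placeholder features; pass them in the same order the project was trained on.
SELECT pgml.predict('GPU project', ARRAY[0.1, 2.0, 5.0]);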
GPU setup for LightGBM is covered in the LightGBM documentation. Train on the GPU by setting the device hyperparameter to cuda:
SELECT pgml.train(
    'GPU project',
    algorithm => 'lightgbm',
    -- "cuda" requires a CUDA-enabled LightGBM build
    hyperparams => '{"device": "cuda"}'
);
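LightGBM also provides an OpenCL-based GPU implementation. If the LightGBM build on your server was compiled for OpenCL rather than CUDA (an assumption about your build, not part of the example above), the equivalent call would be:
SELECT pgml.train(
    'GPU project',
    algorithm => 'lightgbm',
    -- "gpu" selects LightGBM's OpenCL device; requires a GPU-enabled LightGBM build
    hyperparams => '{"device": "gpu"}'
);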
None of the scikit-learn algorithms natively support GPU devices. There are a few projects that improve scikit-learn performance with additional parallelism, but we have not yet integrated them with PostgresML.
If your project would benefit from GPU support, please consider opening an issue so we can prioritize integrations.