GPU Support
PostgresML can leverage GPUs when the underlying libraries and hardware are properly configured on the database server. The CUDA runtime is statically linked during the build process, so it does not introduce additional dependencies on the runtime host.
Models trained on a GPU may also require GPU support to make predictions. Consult each library's documentation on configuring training vs. inference.
TensorFlow
GPU setup for TensorFlow is covered in the documentation. You may acquire pre-trained, GPU-enabled models for fine-tuning from Hugging Face.
Torch
GPU setup for Torch is covered in the documentation. You may acquire pre-trained, GPU-enabled models for fine-tuning from Hugging Face.
Flax
GPU setup for Flax is covered in the documentation. You may acquire pre-trained, GPU-enabled models for fine-tuning from Hugging Face.
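Whichever of these frameworks backs the model, a pre-trained Hugging Face model can be run directly in SQL with pgml.transform. The sketch below is only illustrative: the model name and input text are placeholders, and any compatible model from the Hub can be substituted.

SELECT pgml.transform(
    task => '{
        "task": "text-classification",
        "model": "distilbert-base-uncased-finetuned-sst-2-english"
    }'::JSONB,
    inputs => ARRAY[
        'PostgresML runs inference right next to the data.'  -- illustrative input
    ]
);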
XGBoost
GPU setup for XGBoost is covered in the documentation.
SELECT * FROM pgml.train(
    'GPU project',
    algorithm => 'xgboost',
    hyperparams => '{"tree_method" : "gpu_hist"}'
);
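If the project has not been trained before, the call also needs a task, a relation and a label column. Here is a minimal sketch assuming the bundled diabetes demo dataset loaded via pgml.load_dataset; the table, column and feature values are only examples.

-- Load the demo dataset into the pgml schema (creates pgml.diabetes).
SELECT pgml.load_dataset('diabetes');

-- Train a GPU-accelerated XGBoost regressor.
SELECT * FROM pgml.train(
    'GPU project',
    task => 'regression',
    relation_name => 'pgml.diabetes',
    y_column_name => 'target',
    algorithm => 'xgboost',
    hyperparams => '{"tree_method" : "gpu_hist"}'
);

-- Predictions go through pgml.predict as usual; the feature vector is illustrative.
SELECT pgml.predict('GPU project', ARRAY[0.04, 0.05, 0.06, 0.02, -0.04, -0.03, -0.002, 0.07, 0.13, 0.08]);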
LightGBM
GPU setup for LightGBM is covered in the documentation.
SELECT * FROM pgml.train(
    'GPU project',
    algorithm => 'lightgbm',
    hyperparams => '{"device" : "cuda"}'
);
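Each of these calls adds a new model to the same 'GPU project'. If you want the best performer to serve predictions, pgml.deploy can select it; a sketch, assuming the project was trained as shown above:

-- Deploy whichever trained model scored best on the test set.
SELECT * FROM pgml.deploy('GPU project', strategy => 'best_score');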
Scikit-learn
None of the scikit-learn algorithms natively support GPU devices. A few projects add extra parallelism or acceleration to scikit-learn, but we have not yet integrated them with PostgresML.
If your project would benefit from GPU support, please consider opening an issue so we can prioritize integrations.
Have Questions?
Join our Discord and ask us anything! We're friendly and would love to talk about PostgresML.
Try It Out
Try PostgresML using our free serverless cloud. It comes with GPUs, 5 GiB of space, and plenty of datasets to get you started.