GPU training (Basic)


Find usable CUDA devices

If you want to run several experiments at the same time on your machine, for example for a hyperparameter sweep, then you can use the following utility function to pick GPU indices that are “accessible”, without having to change your code every time.

from lightning.pytorch.accelerators import find_usable_cuda_devices

# Find two GPUs on the system that are not already occupied
trainer = Trainer(accelerator="cuda", devices=find_usable_cuda_devices(2))

from lightning.fabric.accelerators import find_usable_cuda_devices

# Works with Fabric too
fabric = Fabric(accelerator="cuda", devices=find_usable_cuda_devices(2))

This is especially useful when GPUs are configured to be in “exclusive compute mode”, such that only one process at a time is allowed access to the device. This special mode is often enabled on server GPUs or systems shared among multiple users.
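To illustrate the idea behind such a utility, here is a simplified, self-contained sketch (not Lightning's actual implementation): it probes each candidate device index and keeps the ones whose probe succeeds, raising an error if too few are usable. The `probe` callable here is a stand-in for the small CUDA allocation a real helper would attempt; the `fake_probe` and `occupied` names are purely illustrative.

# Hypothetical sketch of a "find usable devices" helper.
# The probe is stubbed out so this runs without any GPU present.

def find_usable_devices(num_devices, total_devices, probe):
    """Return the first `num_devices` indices for which `probe` succeeds."""
    usable = []
    for index in range(total_devices):
        try:
            probe(index)  # e.g. allocate a tiny tensor on that device
        except RuntimeError:
            continue  # device is busy (exclusive compute mode) or unhealthy
        usable.append(index)
        if len(usable) == num_devices:
            return usable
    raise RuntimeError(
        f"Requested {num_devices} devices, but only {len(usable)} are usable."
    )

# Example: pretend devices 0 and 2 are occupied by other processes.
occupied = {0, 2}

def fake_probe(index):
    if index in occupied:
        raise RuntimeError(f"device {index} is busy")

print(find_usable_devices(2, total_devices=4, probe=fake_probe))  # [1, 3]

Because the probe actually touches the device, this style of check also works under exclusive compute mode, where merely listing devices would not reveal which ones are already claimed.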


