📜  PyTorch using multiple GPUs - Python code example

📅  Last modified: 2022-03-11 14:46:27.076000             🧑  Author: Mango

Code example 1
# The easiest solution is to wrap your model in nn.DataParallel, like so:
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Model(input_size, output_size)  # Model is assumed to be defined elsewhere
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # nn.DataParallel replicates the model and splits each input batch across the GPUs
    model = nn.DataParallel(model)

model.to(device)
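
For completeness, here is a minimal self-contained sketch of the same pattern end to end. The toy Model class, the sizes, and the random input batch are placeholder assumptions added for illustration; they are not part of the original snippet.

Code example 2

import torch
import torch.nn as nn

class Model(nn.Module):
    """Toy model used only to make the example runnable."""
    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, x):
        # Under nn.DataParallel, each GPU runs forward() on its own slice of the batch.
        return self.fc(x)

input_size, output_size, batch_size = 5, 2, 30

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicate the model on every visible GPU
model.to(device)

# Inputs only need to be moved to the primary device; DataParallel scatters
# the batch across the GPUs and gathers the outputs back onto that device.
x = torch.randn(batch_size, input_size).to(device)
output = model(x)
print("input:", x.size(), "-> output:", output.size())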