Python PyTorch stack() Method
The PyTorch torch.stack() method joins (concatenates) a sequence of tensors (two or more) along a new dimension. It inserts a new dimension and concatenates the tensors along that dimension, and it requires all input tensors to have the same size and shape. We can also join tensors with torch.cat(), but here we focus on the torch.stack() method.
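As a quick, minimal sketch of the difference (the tensor values below are made up purely for illustration): torch.cat() joins tensors along an existing dimension, while torch.stack() first inserts a new dimension and joins along it.
Python3
import torch

a = torch.tensor([1., 2., 3.])
b = torch.tensor([4., 5., 6.])

# torch.cat() joins along an existing dimension: the result stays 1-D
print(torch.cat((a, b)).shape)    # torch.Size([6])

# torch.stack() inserts a new dimension first: the result becomes 2-D
print(torch.stack((a, b)).shape)  # torch.Size([2, 3])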
Syntax: torch.stack(tensors, dim=0)
Arguments:
- tensors: a sequence of tensors, all of the same shape and number of dimensions
- dim: the dimension along which to insert. It is an integer between 0 and the number of dimensions of the input tensors (inclusive).
Returns: the tensors concatenated along the new dimension.
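The resulting shape can be sketched as follows (the shapes here are chosen only for illustration): stacking n tensors of shape S along dim=0 gives a tensor of shape (n, *S), and other values of dim place the new axis of size n at that position.
Python3
import torch

# three tensors, each of shape (2, 4)
tensors = [torch.zeros(2, 4) for _ in range(3)]

print(torch.stack(tensors, dim=0).shape)  # torch.Size([3, 2, 4])
print(torch.stack(tensors, dim=1).shape)  # torch.Size([2, 3, 4])
print(torch.stack(tensors, dim=2).shape)  # torch.Size([2, 4, 3])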
Let's understand the torch.stack() method with the help of some Python 3 examples.
Example 1:
In the following Python example, we join two one-dimensional tensors using the torch.stack() method.
Python3
# Python 3 program to demonstrate torch.stack() method
# for two one dimensional tensors
# importing torch
import torch
# creating tensors
x = torch.tensor([1.,3.,6.,10.])
y = torch.tensor([2.,7.,9.,13.])
# printing above created tensors
print("Tensor x:", x)
print("Tensor y:", y)
# join above tensor using "torch.stack()"
print("join tensors:")
t = torch.stack((x,y))
# print final tensor after join
print(t)
print("join tensors dimension 0:")
t = torch.stack((x,y), dim = 0)
print(t)
print("join tensors dimension 1:")
t = torch.stack((x,y), dim = 1)
print(t)
Output:
Tensor x: tensor([ 1., 3., 6., 10.])
Tensor y: tensor([ 2., 7., 9., 13.])
join tensors:
tensor([[ 1., 3., 6., 10.],
[ 2., 7., 9., 13.]])
join tensors dimension 0:
tensor([[ 1., 3., 6., 10.],
[ 2., 7., 9., 13.]])
join tensors dimension 1:
tensor([[ 1., 2.],
[ 3., 7.],
[ 6., 9.],
[10., 13.]])
Explanation: In the above code, tensors x and y are one-dimensional, each with four elements. The final stacked tensor is a two-dimensional tensor. Since each input tensor has one dimension, we can stack along dim 0 or dim 1. With dim=0, the tensors are stacked as rows, so the number of rows grows. With dim=1, the tensors are combined column by column, which looks like the transpose of the dim=0 result.
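The dim=0 and dim=1 results above can also be reproduced with torch.cat() by first giving each input the new dimension via unsqueeze(); the following sketch (reusing x and y from Example 1) only illustrates that relationship.
Python3
import torch

x = torch.tensor([1., 3., 6., 10.])
y = torch.tensor([2., 7., 9., 13.])

for d in (0, 1):
    stacked = torch.stack((x, y), dim=d)
    # equivalent: add the new dimension manually, then concatenate along it
    concatenated = torch.cat((x.unsqueeze(d), y.unsqueeze(d)), dim=d)
    print(d, torch.equal(stacked, concatenated))  # prints True for both dims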
Example 2:
In the following Python example, we join two two-dimensional tensors using the torch.stack() method.
Python3
# Python 3 program to demonstrate torch.stack() method
# for two 2D tensors.
# importing torch
import torch
# creating tensors
x = torch.tensor([[1., 3., 6.], [10., 13., 20.]])
y = torch.tensor([[2., 7., 9.], [14., 21., 34.]])
# printing above created tensors
print("Tensor x:\n", x)
print("Tensor y:\n", y)
# join above tensor using "torch.stack()"
print("join tensors")
t = torch.stack((x, y))
# print final tensor after join
print(t)
print("join tensors in dimension 0:")
t = torch.stack((x, y), 0)
print(t)
print("join tensors in dimension 1:")
t = torch.stack((x, y), 1)
print(t)
print("join tensors in dimension 2:")
t = torch.stack((x, y), 2)
print(t)
Output:
Tensor x:
tensor([[ 1., 3., 6.],
[10., 13., 20.]])
Tensor y:
tensor([[ 2., 7., 9.],
[14., 21., 34.]])
join tensors
tensor([[[ 1., 3., 6.],
[10., 13., 20.]],
[[ 2., 7., 9.],
[14., 21., 34.]]])
join tensors in dimension 0:
tensor([[[ 1., 3., 6.],
[10., 13., 20.]],
[[ 2., 7., 9.],
[14., 21., 34.]]])
join tensors in dimension 1:
tensor([[[ 1., 3., 6.],
[ 2., 7., 9.]],
[[10., 13., 20.],
[14., 21., 34.]]])
join tensors in dimension 2:
tensor([[[ 1., 2.],
[ 3., 7.],
[ 6., 9.]],
[[10., 14.],
[13., 21.],
[20., 34.]]])
Explanation: In the above code, x and y are two-dimensional tensors. Notice that the final tensor is a 3-D tensor. Since each input tensor has 2 dimensions, we can stack along dim 0, 1, or 2. Look at the differences between the final output tensors for dim = 0, 1, and 2.
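To see where the new axis lands for each choice of dim, a small sketch like the one below (reusing x and y from this example) prints only the resulting shapes.
Python3
import torch

x = torch.tensor([[1., 3., 6.], [10., 13., 20.]])
y = torch.tensor([[2., 7., 9.], [14., 21., 34.]])

# the new axis has size 2 (the number of stacked tensors) and moves with dim;
# dims 0 and 1 happen to print the same shape here because the inputs have 2 rows
for d in range(3):
    print(d, torch.stack((x, y), dim=d).shape)
# 0 torch.Size([2, 2, 3])
# 1 torch.Size([2, 2, 3])
# 2 torch.Size([2, 3, 2])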
Example 3:
In this example, we join more than two tensors. We can join any number of tensors; a sketch after this example's output also shows stacking a whole Python list of tensors.
Python3
# Python 3 program to demonstrate torch.stack() method
# for three one-dimensional tensors
# importing torch
import torch
# creating tensors
x = torch.tensor([1., 3., 6., 10.])
y = torch.tensor([2., 7., 9., 13.])
z = torch.tensor([4., 5., 8., 11.])
# printing above created tensors
print("Tensor x:", x)
print("Tensor y:", y)
print("Tensor z:", z)
# join above tensor using "torch.stack()"
print("join tensors:")
t = torch.stack((x, y, z))
# print final tensor after join
print(t)
print("join tensors dimension 0:")
t = torch.stack((x, y, z), dim=0)
print(t)
print("join tensors dimension 1:")
t = torch.stack((x, y, z), dim=1)
print(t)
Output:
Tensor x: tensor([ 1., 3., 6., 10.])
Tensor y: tensor([ 2., 7., 9., 13.])
Tensor z: tensor([ 4., 5., 8., 11.])
join tensors:
tensor([[ 1., 3., 6., 10.],
[ 2., 7., 9., 13.],
[ 4., 5., 8., 11.]])
join tensors dimension 0:
tensor([[ 1., 3., 6., 10.],
[ 2., 7., 9., 13.],
[ 4., 5., 8., 11.]])
join tensors dimension 1:
tensor([[ 1., 2., 4.],
[ 3., 7., 5.],
[ 6., 9., 8.],
[10., 13., 11.]])
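Since torch.stack() accepts any sequence of same-shaped tensors, a common pattern is to collect tensors in an ordinary Python list and stack them at the end; the values in this sketch are made up for illustration.
Python3
import torch

# collect several same-shaped tensors in an ordinary list
batch = [torch.full((4,), float(i)) for i in range(5)]

# a list works just like a tuple as the first argument
result = torch.stack(batch, dim=0)
print(result.shape)  # torch.Size([5, 4])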
Example 4: Demonstrating an error
In the following example, we show the error that is raised when the input tensors do not have the same shape.
Python3
# Python 3 program to demonstrate torch.stack() method
# for one-dimensional tensors
# importing torch
import torch
# creating tensors
x = torch.tensor([1., 3., 6., 10.])
y = torch.tensor([2., 7., 9.])
# printing above created tensors
print("Tensor x:", x)
print("Tensor y:", y)
# join above tensor using "torch.stack()"
print("join tensors:")
t = torch.stack((x, y))
# print final tensor after join
print(t)
print("join tensors dimension 0:")
t = torch.stack((x, y), dim=0)
print(t)
print("join tensors dimension 1:")
t = torch.stack((x, y), dim=1)
print(t)
Output:
Tensor x: tensor([ 1., 3., 6., 10.])
Tensor y: tensor([2., 7., 9.])
join tensors:
RuntimeError: stack expects each tensor to be equal size, but got [4] at entry 0 and [3] at entry 1
Notice that the two tensors do not have the same shape, so a RuntimeError is raised. Likewise, a RuntimeError is raised when the tensors have different numbers of dimensions. Try tensors of different dimensions yourself and see the output.
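When the inputs may not line up, one option is to verify the shapes before stacking; the sketch below is only illustrative, and the helper name safe_stack is made up for this example.
Python3
import torch

def safe_stack(tensors, dim=0):
    # hypothetical helper: check that all shapes match before calling torch.stack()
    shapes = {tuple(t.shape) for t in tensors}
    if len(shapes) != 1:
        raise ValueError(f"cannot stack tensors with shapes {shapes}")
    return torch.stack(tensors, dim=dim)

x = torch.tensor([1., 3., 6., 10.])
y = torch.tensor([2., 7., 9.])

try:
    safe_stack((x, y))
except ValueError as e:
    print("stacking skipped:", e)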