📜  Python – Matrix Multiplication Using PyTorch

📅  Last Modified: 2022-05-13 01:55:28.516000             🧑  Author: Mango

Matrix multiplication is an integral part of scientific computing. It becomes complicated when the matrices are large, and one easy way to compute the product of two matrices is to use the methods provided by PyTorch. This article covers how to perform matrix multiplication using PyTorch.

PyTorch and tensors:

PyTorch is a package that can be used for neural-network-based deep learning projects. It is an open-source library developed by Facebook's AI research team, and it can replace NumPy while adding the power of GPUs. One of the important classes this library provides is the Tensor, which is essentially the n-dimensional array offered by the NumPy package. PyTorch offers many methods that can be applied to a Tensor, which makes computations faster and easier. A tensor can only hold elements of a single data type.
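As a quick sketch of creating a Tensor and checking that all of its elements share one data type (the variable name a is just a placeholder):

Python3
import torch

# a 2 x 3 tensor built from a nested Python list
a = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])

# every element shares a single dtype (int64 here)
print(a.shape, a.dtype)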

Matrix multiplication with PyTorch:

The methods in PyTorch expect their inputs to be Tensors. The methods available in PyTorch for the matrix multiplication of Tensors are:

  1. torch.mm()
  2. torch.matmul()
  3. torch.bmm()
  4. The @ operator

torch.mm():

This method computes matrix multiplication by taking an m×n tensor and an n×p tensor. It can deal only with two-dimensional matrices, not with single-dimensional ones. This function does not support broadcasting. Broadcasting is simply the way tensors of unequal shapes are handled: the smaller tensor is broadcast to fit the shape of the wider or larger tensor so that the operation can be carried out. The syntax of the function is given below.
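A sketch of the general call form (the tensor names are placeholders):

torch.mm(Tensor_1, Tensor_2, out=None)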

The parameters are the two tensors; the third is an optional keyword argument, where another tensor can be supplied to hold the output values.
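As a rough sketch of that optional out argument (the names a, b and result are placeholder choices), the product can be written into a preallocated tensor:

Python3
import torch

a = torch.tensor([[1, 2],
                  [3, 4]])
b = torch.tensor([[5, 6],
                  [7, 8]])

# preallocated tensor that receives the product
result = torch.empty(2, 2, dtype=torch.int64)

torch.mm(a, b, out=result)
print(result)   # tensor([[19, 22], [43, 50]])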

Example 1: Matrices of the same dimension

Here both inputs are of the same dimension, so the output will also be of the same dimension.

Python3
import torch
  
mat_1 = torch.tensor([[1, 2, 3],
                      [4, 3, 8],
                      [1, 7, 2]])
  
mat_2 = torch.tensor([[2, 4, 1],
                      [1, 3, 6],
                      [2, 6, 5]])
  
torch.mm(mat_1, mat_2, out=None)


Output:

tensor([[10, 28, 28],
        [27, 73, 62],
        [13, 37, 53]])

Example 2: Matrices of different dimensions

Here tensor_1 is of dimension 2×2 and tensor_2 is of dimension 2×3, so the output will be of dimension 2×3.

Python3

import torch
  
mat_1 = torch.tensor([[1, 2],
                      [4, 3]])
  
mat_2 = torch.tensor([[2, 4, 1],
                      [1, 3, 6]])
  
torch.mm(mat_1, mat_2, out=None)

Output:

tensor([[ 4, 10, 13],
        [11, 25, 22]])

torch.matmul():

This method allows the multiplication of two vector matrices (single-dimensional tensors), two-dimensional matrices, and mixed combinations of the two. It also supports broadcasting and batch operations. The operation performed is decided by the dimensions of the input matrices. The general syntax is given below.
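A sketch of the general call form (the tensor names are placeholders):

torch.matmul(Tensor_1, Tensor_2, out=None)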

The table below lists the various possible dimensions of the arguments and the action taken for each combination; a short sketch of the batched case follows the table.

argument_1                 | argument_2                 | Action taken
1-dimensional              | 1-dimensional              | The scalar product is calculated
2-dimensional              | 2-dimensional              | General matrix multiplication is done
1-dimensional              | 2-dimensional              | Tensor-1 is prepended with a 1 to match the dimension of tensor-2
2-dimensional              | 1-dimensional              | The matrix-vector product is calculated
1/N-dimensional (N > 2)    | 1/N-dimensional (N > 2)    | Batched matrix multiplication is done
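For the batched case in the last row, a small sketch (the shapes here are arbitrary choices): torch.matmul() broadcasts a plain 2D matrix across a whole batch of matrices.

Python3
import torch

batch = torch.randn(10, 3, 4)   # a batch of ten 3 x 4 matrices
mat = torch.randn(4, 5)         # a single 4 x 5 matrix

# the 2D matrix is broadcast across the batch dimension
out = torch.matmul(batch, mat)
print(out.shape)                # torch.Size([10, 3, 5])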

Example 1: Arguments of the same dimension

Python3

import torch
  
# both arguments 1D
vec_1 = torch.tensor([3, 6, 2])
vec_2 = torch.tensor([4, 1, 9])
  
print("Single dimensional tensors :", torch.matmul(vec_1, vec_2))
  
# both arguments 2D
mat_1 = torch.tensor([[1, 2, 3],
                      [4, 3, 8],
                      [1, 7, 2]])
  
mat_2 = torch.tensor([[2, 4, 1],
                      [1, 3, 6],
                      [2, 6, 5]])
  
out = torch.matmul(mat_1, mat_2)
  
print("\n3x3 dimensional tensors :\n", out)

Output:

Single dimensional tensors : tensor(36)

3x3 dimensional tensors :
 tensor([[10, 28, 28],
        [27, 73, 62],
        [13, 37, 53]])

Example 2: Arguments of different dimensions

Python3

import torch
  
# first argument 1D and second argument 2D
mat1_1 = torch.tensor([3, 6, 2])
  
mat1_2 = torch.tensor([[1, 2, 3],
                       [4, 3, 8],
                       [1, 7, 2]])
  
out_1 = torch.matmul(mat1_1, mat1_2)
print("\n1D-2D multiplication :\n", out_1)
  
# first argument 2D and second argument 1D
mat2_1 = torch.tensor([[2, 4, 1],
                       [1, 3, 6],
                       [2, 6, 5]])
  
mat2_2 = torch.tensor([4, 1, 9])
  
# assigning to output tensor
out_2 = torch.matmul(mat2_1, mat2_2)
  
print("\n2D-1D multiplication :\n", out_2)

Output:

1D-2D multiplication :
 tensor([29, 38, 61])

2D-1D multiplication :
 tensor([21, 61, 59])

Example 3: N-dimensional arguments (N > 2)

Python3

import torch
  
# creating Tensors using randn()
mat_1 = torch.randn(2, 3, 3)
mat_2 = torch.randn(3)
  
# printing the matrices
print("matrix A :\n", mat_1)
print("\nmatrix B :\n", mat_2)
  
# output
print("\nOutput :\n", torch.matmul(mat_1, mat_2))

Output:

matrix A :
 tensor([[[ 0.5433,  0.0546, -0.5301],
         [ 0.9275, -0.0420, -1.3966],
         [-1.1851, -0.2918, -0.7161]],

        [[-0.8659,  1.8350,  1.6068],
         [-1.1046,  1.0045, -0.1193],
         [ 0.9070,  0.7325, -0.4547]]])

matrix B :
 tensor([ 1.8785, -0.4231,  0.1606])

Output :
 tensor([[ 0.9124,  1.5358, -2.2177],
        [-2.1448, -2.5191,  1.3208]])

torch.bmm():

This method provides batched matrix multiplication for the case where both matrices to be multiplied are exactly three-dimensional (x×y×z) and the first dimension (x) of both matrices is the same. It does not support broadcasting. The syntax is as follows.
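A sketch of the general call form (the tensor names are placeholders; the deterministic flag discussed below was accepted only by older PyTorch releases and is not part of recent ones):

torch.bmm(Tensor_1, Tensor_2, out=None)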

The 'deterministic' argument takes a boolean value: 'false' gives a faster, non-deterministic computation, while 'true' gives a slower but deterministic computation.

Example:

In the example below, matrix_1 is of dimension 2×3×3 and the second matrix is of dimension 2×3×4.

Python3

import torch
  
# 3D matrices
mat_1 = torch.randn(2, 3, 3)
mat_2 = torch.randn(2, 3, 4)
  
print("matrix A :\n",mat_1)
print("\nmatrix B :\n",mat_2)
  
print("\nOutput :\n",torch.bmm(mat_1,mat_2))

Output:

matrix A :
 tensor([[[-0.0135, -0.9197, -0.3395],
         [-1.0369, -1.3242,  1.4799],
         [-0.0182, -1.2917,  0.6575]],

        [[-0.3585, -0.0478,  0.4674],
         [-0.6688, -0.9217, -1.2612],
         [ 1.6323, -0.0640,  0.4357]]])

matrix B :
 tensor([[[ 0.2431, -0.1044, -0.1437, -1.4982],
         [-1.4318, -0.2510,  1.6247,  0.5623],
         [ 1.5265, -0.8568, -2.1125, -0.9463]],

        [[ 0.0182,  0.5207,  1.2890, -1.3232],
         [-0.2275, -0.8006, -0.6909, -1.0108],
         [ 1.3881, -0.0327, -1.4890, -0.5550]]])

Output :
 tensor([[[ 0.7954,  0.5231, -0.7752, -0.1756],
         [ 3.9031, -0.8274, -5.1288, -0.5915],
         [ 2.8488, -0.2372, -3.4850, -1.3212]],

        [[ 0.6532, -0.1637, -1.1251,  0.2633],
         [-1.5532,  0.4309,  1.6527,  2.5167],
         [ 0.6492,  0.8870,  1.4994, -2.3371]]])

** Note: Since the random values are filled in dynamically, the matrices will differ on every run.

The @ operator:

The @ operator, when applied to tensors, behaves like torch.matmul(): it computes the scalar product of 1D tensors and ordinary matrix multiplication of 2D matrices. If both matrices have the same dimensions, the multiplication is carried out normally, without any broadcasting/prepending. If either matrix has a different dimension, appropriate broadcasting is carried out first and then the multiplication is performed. This operator also works on N-dimensional matrices.

Example:

Python3

import torch

# single dimensional matrices
oneD_1 = torch.tensor([3, 6, 2])
oneD_2 = torch.tensor([4, 1, 9])
  
  
# two dimensional matrices
twoD_1 = torch.tensor([[1, 2, 3],
                       [4, 3, 8],
                       [1, 7, 2]])
twoD_2 = torch.tensor([[2, 4, 1],
                       [1, 3, 6],
                       [2, 6, 5]])
  
# N-dimensional matrices (N>2)
  
# 2x3x3 dimensional matrix
ND_1 = torch.tensor([[[-0.0135, -0.9197, -0.3395],
                      [-1.0369, -1.3242,  1.4799],
                      [-0.0182, -1.2917,  0.6575]],
  
                     [[-0.3585, -0.0478,  0.4674],
                      [-0.6688, -0.9217, -1.2612],
                      [1.6323, -0.0640,  0.4357]]])
  
# 2x3x4 dimensional matrix
ND_2 = torch.tensor([[[0.2431, -0.1044, -0.1437, -1.4982],
                      [-1.4318, -0.2510,  1.6247,  0.5623],
                      [1.5265, -0.8568, -2.1125, -0.9463]],
  
                     [[0.0182,  0.5207,  1.2890, -1.3232],
                      [-0.2275, -0.8006, -0.6909, -1.0108],
                      [1.3881, -0.0327, -1.4890, -0.5550]]])
  
print("1D matrices output :\n", oneD_1 @ oneD_2)
print("\n2D matrices output :\n", twoD_1 @ twoD_2)
print("\nN-D matrices output :\n", ND_1 @ ND_2)
print("\n Mixed matrices output :\n", oneD_1 @ twoD_1 @ twoD_2)

Output:

1D matrices output :
 tensor(36)

2D matrices output :
 tensor([[10, 28, 28],
        [27, 73, 62],
        [13, 37, 53]])

N-D matrices output :
 tensor([[[ 0.7953,  0.5231, -0.7751, -0.1757],
         [ 3.9030, -0.8274, -5.1287, -0.5915],
         [ 2.8487, -0.2372, -3.4850, -1.3212]],

        [[ 0.6531, -0.1637, -1.1250,  0.2633],
         [-1.5532,  0.4309,  1.6526,  2.5166],
         [ 0.6491,  0.8869,  1.4995, -2.3370]]])

 Mixed matrices output:
 tensor([218, 596, 562])