📅  Last modified: 2023-12-03 15:35:21.539000             🧑  Author: Mango
MSE (Mean Squared Error) is a common loss function used in machine learning for regression problems, where the goal is to predict a continuous value. It measures the average of the squared differences between the predicted and actual values. In PyTorch, this function is provided by the nn.MSELoss() module.
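Concretely, for predicted values \hat{y}_i and targets y_i over N elements, the default (mean-reduced) loss is

\mathrm{MSE}(\hat{y}, y) = \frac{1}{N} \sum_{i=1}^{N} \left(\hat{y}_i - y_i\right)^2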
loss_fn = nn.MSELoss()
output = loss_fn(predicted, actual)
predicted: the predicted tensor of shape (batch_size, *).
actual: the ground-truth tensor of shape (batch_size, *).
batch_size: the number of samples (for example, images) in a batch.
*: any number of additional dimensions (see the sketch below).
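Because of the (batch_size, *) convention, the same call works on higher-dimensional tensors as well. Here is a minimal sketch with a hypothetical batch of 8 RGB 32x32 images; the shapes are chosen purely for illustration and are not part of the original example:
import torch
import torch.nn as nn
loss_fn = nn.MSELoss()
# Hypothetical image-shaped tensors: (batch_size, channels, height, width)
predicted = torch.rand(8, 3, 32, 32)
actual = torch.rand(8, 3, 32, 32)
loss = loss_fn(predicted, actual)   # averaged over all 8*3*32*32 elements
print(loss.shape)                   # torch.Size([]) -- a single scalar tensor
The complete 1-D example below walks through the same call step by step.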
import torch
import torch.nn as nn
# Define the tensors
predicted = torch.tensor([1.0, 2.1, 3.0, 4.2])
actual = torch.tensor([0.9, 2.2, 3.1, 4.3])
# Define the loss function
loss_fn = nn.MSELoss()
# Calculate the loss
loss = loss_fn(predicted, actual)
# Print the loss
print(loss)
Output:
tensor(0.0100)
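Since nn.MSELoss() simply averages the squared element-wise differences, the same number can be reproduced by hand. The snippet below, using the predicted and actual tensors defined above, is just a sanity check; it also shows the reduction argument that nn.MSELoss accepts:
# Manual computation: mean of the squared differences
manual = ((predicted - actual) ** 2).mean()
print(manual)                                            # tensor(0.0100), matches nn.MSELoss()
# reduction='sum' returns the sum instead of the mean,
# reduction='none' returns the per-element squared errors
print(nn.MSELoss(reduction='sum')(predicted, actual))    # tensor(0.0400)
print(nn.MSELoss(reduction='none')(predicted, actual))   # tensor of shape (4,)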
In this tutorial, we learned how to use the MSE loss function in PyTorch. MSE loss is commonly used in regression problems, where the goal is to predict a continuous value, and we saw how to compute it with the nn.MSELoss() module.
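To see the loss in the context of an actual regression problem, here is a minimal training-loop sketch. The toy data, linear model, optimizer, and learning rate below are illustrative assumptions, not part of the tutorial above:
import torch
import torch.nn as nn
# Toy regression data: y = 2x + 1 plus a little noise (illustrative only)
x = torch.linspace(0, 1, 16).unsqueeze(1)          # shape (16, 1)
y = 2 * x + 1 + 0.01 * torch.randn_like(x)
model = nn.Linear(1, 1)                            # simple linear regressor
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(100):
    optimizer.zero_grad()         # clear gradients from the previous step
    predicted = model(x)          # forward pass
    loss = loss_fn(predicted, y)  # MSE between predictions and targets
    loss.backward()               # backpropagate
    optimizer.step()              # update the weights
print(loss.item())                # final training loss, close to zero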