Tensorflow.js tf.train.rmsprop() Function
Tensorflow.js is an open-source library developed by Google for running machine learning models and deep learning neural networks in the browser or in a Node.js environment.
The tf.train.rmsprop() function is used to create a tf.RMSPropOptimizer that uses the RMSProp gradient descent algorithm. This implementation of the RMSProp optimizer is not the centered version of RMSProp, and it uses plain momentum.
Syntax:
tf.train.rmsprop(learningRate, decay, momentum, epsilon, centered)
Parameters:
- learningRate (number): It specifies the learning rate that the rmsprop gradient descent algorithm will use.
- decay (number): It specifies the decay rate for each gradient.
- momentum (number): It specifies the momentum that the rmsprop gradient descent algorithm will use.
- epsilon (number): It specifies a small constant value used to avoid a zero denominator.
- centered (boolean): It specifies whether the gradients are normalized by the estimated variance of the gradient.
Return Value: It returns a tf.RMSPropOptimizer.
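To see what these parameters do, the update rule behind the optimizer can be sketched in plain JavaScript for a single scalar parameter. This is a minimal, illustrative implementation of non-centered RMSProp with a plain momentum accumulator, not tf.js internals; the state names `cache` and `velocity` and the hyperparameter values below are our own choices for the sketch.

```javascript
// Minimal sketch of one non-centered RMSProp step with plain momentum.
// State names (cache, velocity) are illustrative, not tf.js internals.
function rmspropStep(param, grad, state, lr, decay, momentum, epsilon) {
  // cache: exponential moving average of squared gradients,
  // controlled by the decay parameter.
  state.cache = decay * state.cache + (1 - decay) * grad * grad;
  // velocity: momentum accumulator of RMS-normalized gradient steps;
  // epsilon keeps the denominator away from zero.
  state.velocity = momentum * state.velocity +
    (lr * grad) / (Math.sqrt(state.cache) + epsilon);
  return param - state.velocity;
}

// Usage: minimize f(w) = w^2 (gradient 2*w), starting from w = 5.
// The values below are illustrative, not tf.js defaults.
let w = 5;
const state = { cache: 0, velocity: 0 };
for (let i = 0; i < 200; i++) {
  w = rmspropStep(w, 2 * w, state, 0.1, 0.9, 0, 1e-8);
}
console.log(w); // w ends up near the minimum at 0
```

Because the step is normalized by the running RMS of the gradient, each update has a magnitude on the order of the learning rate regardless of the raw gradient scale, which is what the cache/decay machinery buys you.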
Example 1: Fit the function f = (a*x + b) using the RMSProp optimizer by learning the coefficients a and b.
Javascript
// Importing tensorflow
import * as tf from "@tensorflow/tfjs"
const xs = tf.tensor1d([0, 1, 2]);
const ys = tf.tensor1d([1.1, 5.9, 16.8]);
// Choosing random coefficients.
const a = tf.scalar(Math.random()).variable();
const b = tf.scalar(Math.random()).variable();
// Defining function f = (a*x + b). We will use
// optimizer to fit f
const f = x => a.mul(x).add(b);
const loss = (pred, label) => pred.sub(label).square().mean();
// Define rate which will be used by rmsprop algorithm
const learningRate = 0.01;
// Create optimizer
const optimizer = tf.train.rmsprop(learningRate);
// Train the model.
for (let i = 0; i < 8; i++) {
optimizer.minimize(() => loss(f(xs), ys));
}
// Make predictions.
console.log(
`a: ${a.dataSync()}, b: ${b.dataSync()}`);
const preds = f(xs).dataSync();
preds.forEach((pred, i) => {
console.log(`x: ${i}, pred: ${pred}`);
});
Output:
a: 0.9164762496948242, b: 1.0887205600738525
x: 0, pred: 1.0887205600738525
x: 1, pred: 2.0051968097686768
x: 2, pred: 2.921673059463501
Example 2: Fit a quadratic equation using the RMSProp optimizer by learning the coefficients a, b, and c. The optimizer will use the following configuration:
- learningRate = 0.01
- decay = 0.1
- momentum = 1
- epsilon = 0.5
- centered = true
Javascript
// Importing tensorflow
import * as tf from "@tensorflow/tfjs"
const xs = tf.tensor1d([0, 1, 2, 3]);
const ys = tf.tensor1d([1.1, 5.9, 16.8, 33.9]);
// Choosing random coefficients
const a = tf.scalar(Math.random()).variable();
const b = tf.scalar(Math.random()).variable();
const c = tf.scalar(Math.random()).variable();
// Defining function f = (a*x^2 + b*x + c)
const f = x => a.mul(x.square()).add(b.mul(x)).add(c);
const loss = (pred, label) => pred.sub(label).square().mean();
// Setting configurations for our optimizer
const learningRate = 0.01;
const decay = 0.1;
const momentum = 1;
const epsilon = 0.5;
const centered = true;
// Create the optimizer
const optimizer = tf.train.rmsprop(learningRate,
decay, momentum, epsilon, centered);
// Train the model.
for (let i = 0; i < 8; i++) {
optimizer.minimize(() => loss(f(xs), ys));
}
// Make predictions.
console.log(`a: ${a.dataSync()},
b: ${b.dataSync()}, c: ${c.dataSync()}`);
const preds = f(xs).dataSync();
preds.forEach((pred, i) => {
console.log(`x: ${i}, pred: ${pred}`);
});
Output:
a: 3.918823003768921, b: 3.333444833755493, c: 6.297145843505859
x: 0, pred: 6.297145843505859
x: 1, pred: 13.549413681030273
x: 2, pred: 28.639328002929688
x: 3, pred: 51.56688690185547
Reference: https://js.tensorflow.org/api/1.0.0/#train.rmsprop