Tensorflow.js tf.train.adamax() Function


Tensorflow.js is an open-source library developed by Google for running machine learning models and deep learning neural networks in the browser or in a Node.js environment.

The tf.train.adamax() function is used to create a tf.AdamaxOptimizer, which implements the adamax algorithm.
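
For background, adamax is a variant of Adam that replaces Adam's second-moment estimate with an infinity-norm based quantity. A sketch of the per-parameter update, following the Adam paper (Kingma & Ba, 2014), where g is the gradient, alpha the learning rate, and t the step count:

m = beta1 * m + (1 - beta1) * g        (first-moment estimate)
u = max(beta2 * u, |g|)                (infinity-norm second moment)
theta = theta - (alpha / (1 - beta1^t)) * m / u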

Syntax:

tf.train.adamax(learningRate, beta1, beta2, epsilon, decay)

Parameters:

  • learningRate: It specifies the learning rate to be used by the adamax gradient descent algorithm.
  • beta1: It specifies the exponential decay rate for the first moment estimates.
  • beta2: It specifies the exponential decay rate for the second moment estimates.
  • epsilon: It specifies a small constant for numerical stability.
  • decay: It specifies the decay rate applied at each update.

Return value: It returns a tf.AdamaxOptimizer.
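
A minimal usage sketch (the learning rate 0.002 below is just an illustrative value; omitted parameters fall back to the library defaults). The returned optimizer can be used directly with optimizer.minimize(), as in the examples that follow, or passed to model.compile() of a Layers model:

Javascript
// Importing tensorflow
import * as tf from "@tensorflow/tfjs"
 
// Create an adamax optimizer; only the learning rate is given here.
const optimizer = tf.train.adamax(0.002);
 
// The returned tf.AdamaxOptimizer can be handed straight to model.compile().
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({optimizer: optimizer, loss: "meanSquaredError"});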

Example 1: Fit the function f = (a*x + b) with the adamax optimizer by learning the coefficients a and b.

Javascript
// Importing tensorflow
import * as tf from "@tensorflow/tfjs"
 
const xs = tf.tensor1d([0, 1, 2, 3]);
const ys = tf.tensor1d([1.1, 5.9, 16.8, 33.9]);
 
// Choosing random coefficients
const a = tf.scalar(Math.random()).variable();
const b = tf.scalar(Math.random()).variable();
 
// Defining function f = (a*x + b).
const f = x => a.mul(x).add(b);
const loss = (pred, label) => pred.sub(label).square().mean();
 
// Defining learning rate of adamax algorithm
const learningRate = 0.01;
 
// Creating our optimizer.
const optimizer = tf.train.adamax(learningRate);
 
// Train the model.
for (let i = 0; i < 10; i++) {
   optimizer.minimize(() => loss(f(xs), ys));
}
 
// Make predictions.
console.log(`a: ${a.dataSync()}, b: ${b.dataSync()}`);
const preds = f(xs).dataSync();
preds.forEach((pred, i) => {
   console.log(`x: ${i}, pred: ${pred}`);
});


Output:

a: 0.4271160364151001, b: 0.21284617483615875
x: 0, pred: 0.21284617483615875
x: 1, pred: 0.6399621963500977
x: 2, pred: 1.0670782327651978
x: 3, pred: 1.4941942691802979

Example 2: Fit the function f = (a*x + b) with the adamax optimizer using a custom configuration, by learning the coefficients a and b. The configuration of our optimizer is as follows:

  • learningRate = 0.01;
  • beta1 = 0.1;
  • beta2 = 0.1;
  • epsilon = 0.3;
  • decay = 0.5;

Javascript

// Importing tensorflow
import * as tf from "@tensorflow/tfjs"
 
const xs = tf.tensor1d([0, 1, 2, 3]);
const ys = tf.tensor1d([1.1, 5.9, 16.8, 33.9]);
 
// Choosing random coefficients
const a = tf.scalar(Math.random()).variable();
const b = tf.scalar(Math.random()).variable();
 
// Defining function f = (a*x + b).
const f = x => a.mul(x).add(b);
const loss = (pred, label) => pred.sub(label).square().mean();
 
// Defining configurations of adamax algorithm
const learningRate = 0.01;
const beta1 = 0.1;
const beta2 = 0.1;
const epsilon = 0.3;
const decay = 0.5;
 
// Creating our optimizer.
const optimizer = tf.train.adamax(
    learningRate, beta1, beta2, epsilon, decay);
 
// Train the model.
for (let i = 0; i < 10; i++) {
   optimizer.minimize(() => loss(f(xs), ys));
}
 
// Make predictions.
console.log(`a: ${a.dataSync()}, b: ${b.dataSync()}`);
const preds = f(xs).dataSync();
preds.forEach((pred, i) => {
   console.log(`x: ${i}, pred: ${pred}`);
});

Output:

a: 0.8346626162528992, b: 0.5925931334495544
x: 0, pred: 0.5925931334495544
x: 1, pred: 1.4272557497024536
x: 2, pred: 2.261918306350708
x: 3, pred: 3.096580982208252

Reference: https://js.tensorflow.org/api/1.0.0/#train.adamax