Solved – Learning a quadratic function using TensorFlow/Keras

Heads up: I'm not sure if this is the best place to post this question, so let me know if there is somewhere better suited.

I am trying to train a simple neural network to learn a simple quadratic function of the form:
$f(x) = 5 - 3x + 2x^2$

I set up a network with a single layer containing a single neuron.
The input is a 2d vector of the form
$(x, x^2)$
and I use no activation function, so the neuron computes $w_1 x + w_2 x^2 + b$. I expect that the weights and bias I extract from the trained network will correspond to the coefficients in $f(x)$.
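For concreteness, this is how I planned to read the fitted coefficients back out after training (a quick sketch, assuming the trained model object is named model, as in the code further down):

# Dense layer parameters: kernel of shape (2, 1) and bias of shape (1,)
weights, bias = model.layers[0].get_weights()
print(weights.flatten())  # hoped-for result: approximately [-3., 2.]
print(bias)               # hoped-for result: approximately [5.]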

I randomly generate training points and labels, as well as a validation set, and train the model using the Keras Sequential API from TensorFlow.

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

def fTest(x_arg):
    return 5 - 3*x_arg + 2*(x_arg)**2

# training data: 5000 random x values in [-10, 10), features (x, x^2), labels f(x)
t = np.random.choice(np.arange(-10, 10, .01), 5000)
t1 = []
for i in range(len(t)):
    t1.append([t[i], t[i]**2])
s = []
for i in range(len(t)):
    s.append(fTest(t[i]))
t1 = np.array(t1)
s = np.array(s)

# validation set, built the same way
v = np.random.choice(np.arange(-10, 10, .01), 5000)
v1 = []
for i in range(len(v)):
    v1.append([v[i], v[i]**2])
u = []
for i in range(len(v)):
    u.append(fTest(v[i]))
v1 = np.array(v1)
u = np.array(u)

# a single linear neuron: output = w1*x + w2*x^2 + b
model = keras.Sequential([
    keras.layers.Dense(1, input_shape=(2,), use_bias=True),
])

model.compile(optimizer='adam',
              loss='mean_squared_logarithmic_error',
              metrics=['mae', 'accuracy'])

model.fit(t1, s, batch_size=50, epochs=2000, validation_data=(v1, u))
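Side note: I know the data-generation loops above aren't idiomatic NumPy; an equivalent vectorized construction would be the following (same arrays, just built without Python loops), but that isn't what my question is about.

# Vectorized equivalent of the training-data loops above
t = np.random.choice(np.arange(-10, 10, .01), 5000)
t1 = np.column_stack([t, t**2])  # inputs (x, x^2)
s = fTest(t)                     # labels f(x)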

The model trains, but poorly: the mean absolute error plateaus around 1. The 'accuracy' metric is also stuck at zero, which I am very confused about.

Epoch 2000/2000
200/200 [==============================] - 0s 23us/step - loss: 0.0018 - mean_absolute_error: 1.0144 - acc: 0.0000e+00 - val_loss: 0.0014 - val_mean_absolute_error: 1.0276 - val_acc: 0.0000e+00

Visually, the model's predictions seem reasonably accurate:

[Plot: the model's predictions overlaid on the true quadratic $f(x)$]

I've tried other loss functions, but none of them seems to work any better. I'm fairly new to TF/Keras, so is there something obvious that I'm missing?

Edit: corrected training output

You're misunderstanding what accuracy, in the sense Keras implements it, measures. Accuracy is the proportion of samples predicted correctly, as in allocating images into classes (e.g. "dog," "cat," or "fish").

In your problem, you're not trying to infer class membership; you just want the predictions to be close to the true values. It's not surprising that the reported accuracy is zero: accuracy counts predictions that match the labels exactly, which essentially never happens with continuous targets, so it doesn't measure anything relevant about this problem.
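In practice, this means you should drop 'accuracy' from the metrics and track regression metrics instead. A minimal sketch of the compile step (assuming the same model object; using plain MSE here is my suggestion rather than something your code requires):

# Regression-appropriate compilation: track error magnitudes, not class accuracy
model.compile(optimizer='adam',
              loss='mean_squared_error',  # direct squared-error loss on f(x)
              metrics=['mae'])            # mean absolute error, in the units of f(x)

With this, the reported mae tells you how far the predictions are from the true values on average, which is the quantity you actually care about.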
