I've implemented a neural network for prediction, and I've used the following formula to normalize the input data:

`Data_normalized_i = (Data_i - Min_data) / (Max_data - Min_data)`
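As a minimal sketch of that formula (the data values here are made up for illustration):

```python
import numpy as np

# Hypothetical example data; any 1-D numeric array works.
data = np.array([10.0, 20.0, 35.0, 50.0])

# Min-max normalization, as in the formula above:
# linearly maps the data into [0, 1].
data_min = data.min()
data_max = data.max()
data_normalized = (data - data_min) / (data_max - data_min)
```

Keeping `data_min` and `data_max` around is what later lets you map network outputs back to the original scale.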

I have some questions:

- How do I interpret the output of my network according to my inputs?
- Must I use the real data input to compare it with my outputs?
- If I have to apply some transformation to my outputs, how should I go about doing it? And in that case, will the test error be calculated from the raw outputs or from the transformed outputs?


#### Best Answer

What the output of your neurons means depends on the objective function you use and on the activation function of the output neurons. For example, if you train with a sum-of-squares error (regression), one can prove that the output of the network is the conditional average of the target data, conditioned on the input. In equations,

$$y_{k}\left(\mathbf{x},\mathbf{w}\right) = \int t_{k}\, p(t_{k}\mid\mathbf{x})\, dt_{k}$$

where $k$ indexes the output neuron, $\mathbf{x}$ is the input vector, $\mathbf{w}$ is the weight vector, $t_{k}$ is the target for neuron $k$, and $y_{k}(\mathbf{x},\mathbf{w})$ is the mapping carried out by the network.
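A small numerical sketch of that result, using invented noisy targets for one fixed input: among constant predictions, the one that minimizes the squared error is (approximately) the mean of the targets, i.e. the conditional average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up targets for a single fixed input x: t = 2 + Gaussian noise.
t = 2.0 + rng.normal(0.0, 1.0, size=100_000)

def sse(y):
    # Mean squared error of the constant prediction y.
    return np.mean((t - y) ** 2)

# Search a grid of candidate predictions; the minimizer should sit
# at (approximately) the sample mean of the targets.
candidates = np.linspace(0.0, 4.0, 401)
best = candidates[np.argmin([sse(y) for y in candidates])]
```

Here `best` lands next to `t.mean()`, mirroring the integral above: the optimal output under squared error is the conditional expectation of the target.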

If you use the cross entropy error function with sigmoidal output units (classification), then the output of each neuron is the probability that the sample corresponds to the class encoded by the neuron. A brief discussion and derivation of this result can be found here.
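A companion sketch for the classification case, again with invented data: for a fixed input whose true class-1 probability is 0.7, the constant output that minimizes the cross-entropy is (approximately) that probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up binary labels for one fixed input x: class 1 with probability 0.7.
p_true = 0.7
labels = (rng.random(100_000) < p_true).astype(float)

def cross_entropy(y):
    # Mean binary cross-entropy of the constant output y.
    eps = 1e-12
    return -np.mean(labels * np.log(y + eps) + (1 - labels) * np.log(1 - y + eps))

# The minimizer over a grid of candidate outputs approximates P(class 1 | x).
candidates = np.linspace(0.01, 0.99, 99)
best = candidates[np.argmin([cross_entropy(y) for y in candidates])]
```

So a sigmoidal output trained with cross-entropy can be read directly as an estimated class probability.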

Try to get a copy of the book for a detailed description. It's a great book and you will learn a lot.

That said, how you should transform your outputs (if a transformation even makes sense) depends on what you are doing and on how those outputs are meant to be interpreted, which you don't explain in your question.
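If your targets were min-max normalized like your inputs, one common choice is simply to invert that formula, and to report the test error on whichever scale you care about. A minimal sketch, assuming the training-set min and max of the targets were saved (the numbers here are hypothetical):

```python
import numpy as np

# Hypothetical target statistics saved from the training set.
target_min, target_max = 5.0, 45.0

# Suppose the network produced these normalized outputs in [0, 1].
outputs_normalized = np.array([0.0, 0.25, 1.0])

# Invert the min-max formula to recover values on the original scale.
outputs = outputs_normalized * (target_max - target_min) + target_min
```

You can then compare `outputs` directly against the un-normalized test targets; just be consistent, and compute the error on the same scale on which you report it.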