At the output of YOLO's final layer, a leaky ReLU is applied, so the predicted width and height can be negative. In that case the cost function returns an undefined value (NaN), since the second sum of the cost takes the square root of the width and height, and we would not be able to update the weights using backprop.

Am I wrong about this, or is there something I am missing here? If I am not wrong, how do we guarantee that the width and height are positive?
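For reference, the term I mean is the width/height component of the loss from the original YOLO paper:

$$\lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}^{\text{obj}}_{ij} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right]$$

so a negative predicted $\hat{w}_i$ or $\hat{h}_i$ would make $\sqrt{\hat{w}_i}$ or $\sqrt{\hat{h}_i}$ undefined.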


#### Best Answer

According to their source code, they actually use an `exp` operation to ensure that $w$ and $h$ are positive values.

```c
box get_region_box(float *x, float *biases, int n, int index, int i, int j, int w, int h, int stride)
{
    box b;
    /* Box center: offset from grid cell (i, j), normalized by the grid size. */
    b.x = (i + x[index + 0*stride]) / w;
    b.y = (j + x[index + 1*stride]) / h;
    /* exp() keeps the width and height positive no matter how negative the raw output is. */
    b.w = exp(x[index + 2*stride]) * biases[2*n]   / w;
    b.h = exp(x[index + 3*stride]) * biases[2*n+1] / h;
    return b;
}
```

Here `w` and `h` are the width and height of the layer's output grid (the call site in darknet passes `l.w` and `l.h`), `b.w` and `b.h` are the normalized width and height of the bounding box, and `x` is the last layer's output. The `biases` are YOLOv2's anchor-box priors: `biases[2*n]` and `biases[2*n+1]` hold the prior width and height of the n-th anchor.
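To see the effect, here is a minimal, self-contained sketch. The `box` struct, the raw output values, and the single anchor prior are assumptions made up for illustration, not darknet's actual data; the decoding logic is the same as in `get_region_box` above.

```c
#include <math.h>
#include <stdio.h>

/* Minimal stand-in for darknet's box struct. */
typedef struct { float x, y, w, h; } box;

/* Same decoding as the get_region_box shown above. */
box get_region_box(float *x, float *biases, int n, int index,
                   int i, int j, int w, int h, int stride)
{
    box b;
    b.x = (i + x[index + 0*stride]) / w;
    b.y = (j + x[index + 1*stride]) / h;
    b.w = exp(x[index + 2*stride]) * biases[2*n]   / w;
    b.h = exp(x[index + 3*stride]) * biases[2*n+1] / h;
    return b;
}

int main(void)
{
    /* Hypothetical raw outputs for one box: tx, ty, tw, th.
       tw and th are strongly negative on purpose. */
    float raw[4]    = {0.3f, 0.7f, -2.0f, -5.0f};
    /* One hypothetical anchor prior: (width, height). */
    float biases[2] = {1.5f, 2.0f};

    /* 13x13 grid, cell (6, 6), contiguous layout so stride = 1. */
    box b = get_region_box(raw, biases, 0, 0, 6, 6, 13, 13, 1);

    /* exp() maps any real input into (0, inf), so both stay positive. */
    printf("b.w = %f, b.h = %f\n", b.w, b.h);
    return 0;
}
```

Compile with `-lm`; the printed width and height are small but strictly positive, so the square roots in the loss stay well-defined.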
