I need to find partial derivatives for:

$L=\frac{1}{2}\sum_{i=1}^{n} w_{i}^{2}\sigma_{i}^{2} - \lambda \left( \sum_{i=1}^{n} w_{i} \bar{r}_{i} - \bar{r} \right) - \mu \left( \sum_{i=1}^{n} w_{i} - 1 \right)$

With 5 variables, I get 5 partial derivatives, $\frac{\partial L}{\partial w_{1}}$, $\frac{\partial L}{\partial w_{2}}$, $\frac{\partial L}{\partial w_{3}}$, $\frac{\partial L}{\partial \lambda}$ and $\frac{\partial L}{\partial \mu}$, from which I need to solve for $w_{1}$, $w_{2}$ and $w_{3}$. For all $i$, $\sigma_{i}$ is known. I don't want to differentiate manually; it would become even more cumbersome with more restrictions. Later, I need to find the point of minimum variance, i.e. the optimal values of $\lambda$, $\mu$ and $w_{i}$ for all $i \in \{1,2,3\}$. I need to vary the 5 variables to find the optimal solution (a bit like Excel's Solver, but nothing like that is currently available to me) and then show the frontier graphically. So my question is: which program would you use to solve such a thing?


#### Best Answer

Just a remark: is there an inequality constraint $\sum_{i=1}^{n} w_{i} \leq 1$ in your system? I merely ask because $\mu$ is usually the KKT multiplier for inequality constraints. If there is indeed an inequality constraint, you will need to satisfy more conditions than just $\nabla L = 0$ to attain optimality (i.e. dual feasibility, the complementary slackness conditions, etc.), in which case you'd be better off using a proper optimization solver. I don't know whether $\bar{r}_{i}$ and $\bar{r}$ are constants and whether the Hessian is positive semidefinite, but if so, this looks like a quadratic program (QP), and you can get solvers for this type of problem (e.g. `quadprog` in MATLAB).
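To make the QP route concrete, here is a minimal sketch in Python with SciPy (as an alternative to MATLAB's `quadprog`); the values for $\sigma_{i}$, $\bar{r}_{i}$ and $\bar{r}$ are made up for illustration and are not from the question:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data for illustration (assumptions, not from the question):
sigma = np.array([0.20, 0.30, 0.25])    # known sigma_i
rbar_i = np.array([0.08, 0.12, 0.10])   # assumed mean returns r-bar_i
rbar = 0.10                             # assumed target return r-bar

def objective(w):
    # (1/2) * sum_i w_i^2 sigma_i^2
    return 0.5 * np.sum(w**2 * sigma**2)

constraints = [
    {"type": "eq", "fun": lambda w: w @ rbar_i - rbar},  # hit the target return
    {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},    # weights sum to 1
]

res = minimize(objective, x0=np.full(3, 1 / 3), constraints=constraints)
w_opt = res.x  # optimal weights; res.success indicates convergence
```

Sweeping `rbar` over a grid of target returns and recording the optimal variance at each point would trace out the frontier the question asks about.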

But, if you know what you're doing… here are some ways to get derivatives.

- **Finite differencing.** You did not mention whether you want numerical or exact derivatives. If accuracy isn't too big a deal, a finite-difference perturbation is the easiest option. Choose a small enough $h$ for your application and you're all set. See http://en.wikipedia.org/wiki/Numerical_differentiation#Finite_difference_formulae
- **Complex-step methods.** If you want a more accurate derivative, you can also calculate a complex-step derivative: http://en.wikipedia.org/wiki/Numerical_differentiation#Complex_variable_methods. Essentially, the idea is this: $$F'(x) \approx \frac{\mathrm{Im}(F(x + ih))}{h}$$ Any programming language that implements a complex number type (e.g. Fortran) can be used to implement this. The derivative you get will be a real number. This is far more accurate than finite differencing, and for a sufficiently small $h$, you'll get a derivative that's pretty close to the exact derivative (to the limit of your machine precision). Note that you can only get 1st-order derivatives with this method; it cannot be chained. Some people interpolate to calculate 2nd-order derivatives, but all the nice properties of the complex-step method are lost in that approach.
- **Automatic differentiation (AD).** This is the fastest and most accurate technique for obtaining numerical derivatives of any order (they can be made accurate to machine precision), but it's also the most complicated from a software point of view. Most optimization modeling languages (like AMPL or GAMS) provide AD facilities to solvers. This is a whole topic unto itself, but in short, if you're using MATLAB, you can use the INTLAB toolbox to quickly and easily calculate the derivatives you need. See this page for options: http://www.autodiff.org/?module=Tools
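The first two methods above fit in a few lines; this sketch uses Python (rather than Fortran) and a toy function $f(x) = e^{x}\sin x$ whose derivative is known in closed form, so the two approximations can be checked:

```python
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x)                 # toy function; also works for complex x

def fprime_exact(x):
    return np.exp(x) * (np.sin(x) + np.cos(x))   # closed-form derivative for comparison

x0 = 1.0

# Central finite difference: accuracy is limited by subtractive cancellation,
# so h cannot be made arbitrarily small.
h_fd = 1e-6
fd = (f(x0 + h_fd) - f(x0 - h_fd)) / (2 * h_fd)

# Complex-step derivative: Im(f(x + ih)) / h involves no subtraction,
# so h can be tiny and the result stays accurate to machine precision.
h_cs = 1e-20
cs = np.imag(f(x0 + 1j * h_cs)) / h_cs
```

Note how the complex-step version tolerates a step of $10^{-20}$, which would be catastrophic for the finite-difference formula.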

But for a system as small and as simple as yours, I'd just do it by hand. Or use a symbolic tool like Maxima (free) or Sage (also free, front end to Maxima).
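If Python happens to be handier than Maxima or Sage, SymPy (also free) can do the same symbolic job: build the Lagrangian, take all five partials, and solve the resulting linear system. A sketch of that workflow, with $\sigma_{i}$, $\bar{r}_{i}$ and $\bar{r}$ left symbolic:

```python
import sympy as sp

# Unknowns: the three weights and the two multipliers
w = sp.symbols("w1 w2 w3")
lam, mu = sp.symbols("lambda mu")
# Known constants, kept symbolic
sigma = sp.symbols("sigma1 sigma2 sigma3", positive=True)
r = sp.symbols("rbar1 rbar2 rbar3")
rbar = sp.Symbol("rbar")

# The Lagrangian from the question
L = (sp.Rational(1, 2) * sum(wi**2 * si**2 for wi, si in zip(w, sigma))
     - lam * (sum(wi * ri for wi, ri in zip(w, r)) - rbar)
     - mu * (sum(w) - 1))

# All five first-order conditions, then solve the (linear) system
unknowns = list(w) + [lam, mu]
grad = [sp.diff(L, v) for v in unknowns]
sol = sp.solve(grad, unknowns, dict=True)
```

Adding more restrictions then only means adding terms to `L`; the differentiation and solving steps stay unchanged.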
